00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1815
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3076
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.169 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.171 The recommended git tool is: git
00:00:00.171 using credential 00000000-0000-0000-0000-000000000002
00:00:00.172 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.203 Fetching changes from the remote Git repository
00:00:00.204 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.224 Using shallow fetch with depth 1
00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.224 > git --version # timeout=10
00:00:00.247 > git --version # 'git version 2.39.2'
00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.247 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.247 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.802 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.812 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.823 Checking out Revision 71481c63295b6b9f0ecef6c6e69e033a6109160a (FETCH_HEAD)
00:00:06.823 > git config core.sparsecheckout # timeout=10
00:00:06.833 > git read-tree -mu HEAD # timeout=10
00:00:06.847 > git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=5
00:00:06.862 Commit message: "jenkins/jjb-config: Disable bsc job until further notice"
00:00:06.863 > git rev-list --no-walk 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=10
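
Note: the block above is Jenkins pinning the jbp repo to a single commit via a shallow fetch. A minimal reproduction outside Jenkins (a sketch; the credential helper, proxy setting, and timeout handling that Jenkins injects are omitted) would be:

    git init jbp && cd jbp
    # depth=1 is sufficient here because 71481c6... is the tip of refs/heads/master
    git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a
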
00:00:06.970 [Pipeline] Start of Pipeline
00:00:06.980 [Pipeline] library
00:00:06.982 Loading library shm_lib@master
00:00:06.982 Library shm_lib@master is cached. Copying from home.
00:00:06.997 [Pipeline] node
00:00:07.015 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.016 [Pipeline] {
00:00:07.028 [Pipeline] catchError
00:00:07.030 [Pipeline] {
00:00:07.043 [Pipeline] wrap
00:00:07.053 [Pipeline] {
00:00:07.059 [Pipeline] stage
00:00:07.060 [Pipeline] { (Prologue)
00:00:07.073 [Pipeline] echo
00:00:07.073 Node: VM-host-SM9
00:00:07.077 [Pipeline] cleanWs
00:00:07.084 [WS-CLEANUP] Deleting project workspace...
00:00:07.084 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.089 [WS-CLEANUP] done
00:00:07.266 [Pipeline] setCustomBuildProperty
00:00:07.346 [Pipeline] nodesByLabel
00:00:07.347 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.356 [Pipeline] httpRequest
00:00:07.360 HttpMethod: GET
00:00:07.360 URL: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:07.361 Sending request to url: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:07.375 Response Code: HTTP/1.1 200 OK
00:00:07.376 Success: Status code 200 is in the accepted range: 200,404
00:00:07.376 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:13.166 [Pipeline] sh
00:00:13.446 + tar --no-same-owner -xf jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:13.464 [Pipeline] httpRequest
00:00:13.469 HttpMethod: GET
00:00:13.469 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:13.470 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:13.494 Response Code: HTTP/1.1 200 OK
00:00:13.495 Success: Status code 200 is in the accepted range: 200,404
00:00:13.495 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:16.393 [Pipeline] sh
00:01:16.671 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:19.220 [Pipeline] sh
00:01:19.501 + git -C spdk log --oneline -n5
00:01:19.501 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:01:19.501 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function
00:01:19.501 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:01:19.501 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair`
00:01:19.501 3b33f4333 test/nvme/cuse: Fix typo
00:01:19.524 [Pipeline] writeFile
00:01:19.540 [Pipeline] sh
00:01:19.867 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:19.879 [Pipeline] sh
00:01:20.161 + cat autorun-spdk.conf
00:01:20.161 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.161 SPDK_TEST_NVME=1
00:01:20.162 SPDK_TEST_FTL=1
00:01:20.162 SPDK_TEST_ISAL=1
00:01:20.162 SPDK_RUN_ASAN=1
00:01:20.162 SPDK_RUN_UBSAN=1
00:01:20.162 SPDK_TEST_XNVME=1
00:01:20.162 SPDK_TEST_NVME_FDP=1
00:01:20.162 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.169 RUN_NIGHTLY=1
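
Note: autorun-spdk.conf printed above is not a parsed config format; it is plain shell that the test scripts source, as the ++ xtrace lines in the next stage show. A minimal sketch of a consumer (hypothetical; the real consumers are prepare_nvme.sh and spdk/autorun.sh):

    source autorun-spdk.conf
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        echo "provisioning an FDP-capable NVMe namespace"
    fi
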
00:01:20.171 [Pipeline] }
00:01:20.187 [Pipeline] // stage
00:01:20.205 [Pipeline] stage
00:01:20.208 [Pipeline] { (Run VM)
00:01:20.222 [Pipeline] sh
00:01:20.503 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:20.503 + echo 'Start stage prepare_nvme.sh'
00:01:20.503 Start stage prepare_nvme.sh
00:01:20.503 + [[ -n 1 ]]
00:01:20.503 + disk_prefix=ex1
00:01:20.503 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:20.503 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:20.503 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:20.503 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.503 ++ SPDK_TEST_NVME=1
00:01:20.503 ++ SPDK_TEST_FTL=1
00:01:20.503 ++ SPDK_TEST_ISAL=1
00:01:20.503 ++ SPDK_RUN_ASAN=1
00:01:20.503 ++ SPDK_RUN_UBSAN=1
00:01:20.503 ++ SPDK_TEST_XNVME=1
00:01:20.503 ++ SPDK_TEST_NVME_FDP=1
00:01:20.503 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.503 ++ RUN_NIGHTLY=1
00:01:20.503 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:20.503 + nvme_files=()
00:01:20.503 + declare -A nvme_files
00:01:20.503 + backend_dir=/var/lib/libvirt/images/backends
00:01:20.503 + nvme_files['nvme.img']=5G
00:01:20.503 + nvme_files['nvme-cmb.img']=5G
00:01:20.503 + nvme_files['nvme-multi0.img']=4G
00:01:20.503 + nvme_files['nvme-multi1.img']=4G
00:01:20.503 + nvme_files['nvme-multi2.img']=4G
00:01:20.503 + nvme_files['nvme-openstack.img']=8G
00:01:20.503 + nvme_files['nvme-zns.img']=5G
00:01:20.503 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:20.503 + (( SPDK_TEST_FTL == 1 ))
00:01:20.503 + nvme_files["nvme-ftl.img"]=6G
00:01:20.503 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:20.503 + nvme_files["nvme-fdp.img"]=1G
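
Note: the loop below hands each name/size pair of this associative array to create_nvme_img.sh. That script is not shown in this log, but the 'Formatting ...' lines it emits are qemu-img's standard output, so a plausible sketch of the pattern (assuming create_nvme_img.sh wraps qemu-img, which is an assumption) is:

    declare -A nvme_files=([nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G)
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        # prints: Formatting '<file>', fmt=raw size=<bytes> preallocation=falloc
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex1-$nvme" "${nvme_files[$nvme]}"
    done
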
00:01:20.503 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:20.503 + for nvme in "${!nvme_files[@]}"
00:01:20.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:20.503 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:20.503 + for nvme in "${!nvme_files[@]}"
00:01:20.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:01:20.763 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:20.763 + for nvme in "${!nvme_files[@]}"
00:01:20.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:20.763 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.763 + for nvme in "${!nvme_files[@]}"
00:01:20.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:20.763 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:20.763 + for nvme in "${!nvme_files[@]}"
00:01:20.763 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:21.023 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:21.023 + for nvme in "${!nvme_files[@]}"
00:01:21.023 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:21.023 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.023 + for nvme in "${!nvme_files[@]}"
00:01:21.023 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:21.023 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.023 + for nvme in "${!nvme_files[@]}"
00:01:21.023 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:01:21.284 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:21.284 + for nvme in "${!nvme_files[@]}"
00:01:21.284 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:21.544 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:21.544 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:21.544 + echo 'End stage prepare_nvme.sh'
00:01:21.544 End stage prepare_nvme.sh
00:01:21.555 [Pipeline] sh
00:01:21.834 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:21.834 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:01:21.834
00:01:21.834 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:21.834 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:21.834 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:21.834 HELP=0
00:01:21.834 DRY_RUN=0
00:01:21.834 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:01:21.834 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:21.834 NVME_AUTO_CREATE=0
00:01:21.834 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:01:21.834 NVME_CMB=,,,,
00:01:21.834 NVME_PMR=,,,,
00:01:21.834 NVME_ZNS=,,,,
00:01:21.834 NVME_MS=true,,,,
00:01:21.834 NVME_FDP=,,,on,
00:01:21.834 SPDK_VAGRANT_DISTRO=fedora38
00:01:21.834 SPDK_VAGRANT_VMCPU=10
00:01:21.834 SPDK_VAGRANT_VMRAM=12288
00:01:21.834 SPDK_VAGRANT_PROVIDER=libvirt
00:01:21.834 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:21.834 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:21.834 SPDK_OPENSTACK_NETWORK=0
00:01:21.834 VAGRANT_PACKAGE_BOX=0
00:01:21.834 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:21.834 FORCE_DISTRO=true
00:01:21.834 VAGRANT_BOX_VERSION=
00:01:21.834 EXTRA_VAGRANTFILES=
00:01:21.834 NIC_MODEL=e1000
00:01:21.834
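
Note: vagrant_create_vm.sh itself is not shown, but comparing the -b arguments in the Setup line against the NVME_* variables dumped above, each -b spec appears to be a comma-separated tuple, roughly as follows (an inference from this output, not documented behavior):

    -b <file>[,<type>[,<namespaces>[,<cmb>[,<pmr>[,<zns>[,<ms>[,<fdp>]]]]]]]
    # ex1-nvme-ftl.img,nvme,,,,,true        7th field -> NVME_MS=true,,,,
    # ex1-nvme-fdp.img,nvme,,,,,,on         8th field -> NVME_FDP=,,,on,
    # ex1-nvme-multi0.img,nvme,<ns1>:<ns2>  3rd field -> NVME_DISKS_NAMESPACES
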
00:01:21.834 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:01:21.834 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:25.121 Bringing machine 'default' up with 'libvirt' provider...
00:01:25.121 ==> default: Creating image (snapshot of base box volume).
00:01:25.379 ==> default: Creating domain with the following settings...
00:01:25.379 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715488892_350c0da642965362a598
00:01:25.379 ==> default: -- Domain type: kvm
00:01:25.379 ==> default: -- Cpus: 10
00:01:25.379 ==> default: -- Feature: acpi
00:01:25.379 ==> default: -- Feature: apic
00:01:25.379 ==> default: -- Feature: pae
00:01:25.379 ==> default: -- Memory: 12288M
00:01:25.379 ==> default: -- Memory Backing: hugepages:
00:01:25.379 ==> default: -- Management MAC:
00:01:25.379 ==> default: -- Loader:
00:01:25.379 ==> default: -- Nvram:
00:01:25.379 ==> default: -- Base box: spdk/fedora38
00:01:25.379 ==> default: -- Storage pool: default
00:01:25.379 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715488892_350c0da642965362a598.img (20G)
00:01:25.379 ==> default: -- Volume Cache: default
00:01:25.379 ==> default: -- Kernel:
00:01:25.379 ==> default: -- Initrd:
00:01:25.379 ==> default: -- Graphics Type: vnc
00:01:25.379 ==> default: -- Graphics Port: -1
00:01:25.379 ==> default: -- Graphics IP: 127.0.0.1
00:01:25.379 ==> default: -- Graphics Password: Not defined
00:01:25.379 ==> default: -- Video Type: cirrus
00:01:25.379 ==> default: -- Video VRAM: 9216
00:01:25.379 ==> default: -- Sound Type:
00:01:25.379 ==> default: -- Keymap: en-us
00:01:25.379 ==> default: -- TPM Path:
00:01:25.379 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:25.379 ==> default: -- Command line args:
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:25.379 ==> default: -> value=-drive,
00:01:25.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:25.379 ==> default: -> value=-drive,
00:01:25.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:25.379 ==> default: -> value=-drive,
00:01:25.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.379 ==> default: -> value=-drive,
00:01:25.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.379 ==> default: -> value=-drive,
00:01:25.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:25.379 ==> default: -> value=-drive,
00:01:25.379 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:25.379 ==> default: -> value=-device,
00:01:25.379 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
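
Note: the fourth controller is the one exercising NVMe Flexible Data Placement: unlike the other three, it hangs off an explicit nvme-subsys device with fdp=on (supported by the vanilla QEMU 8.0 build named above). Stripped of the libvirt wrapping, the same wiring as a direct invocation would look roughly like this; the -device/-drive values are taken from the log, while the machine and memory flags are placeholders:

    qemu-system-x86_64 -machine q35,accel=kvm -m 1024 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
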
00:01:25.379 ==> default: Creating shared folders metadata...
00:01:25.379 ==> default: Starting domain.
00:01:26.757 ==> default: Waiting for domain to get an IP address...
00:01:44.850 ==> default: Waiting for SSH to become available...
00:01:44.850 ==> default: Configuring and enabling network interfaces...
00:01:49.061 default: SSH address: 192.168.121.207:22
00:01:49.061 default: SSH username: vagrant
00:01:49.061 default: SSH auth method: private key
00:01:50.967 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:59.090 ==> default: Mounting SSHFS shared folder...
00:02:00.029 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:00.029 ==> default: Checking Mount..
00:02:01.419 ==> default: Folder Successfully Mounted!
00:02:01.419 ==> default: Running provisioner: file...
00:02:01.988 default: ~/.gitconfig => .gitconfig
00:02:02.555
00:02:02.555 SUCCESS!
00:02:02.555
00:02:02.555 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:02:02.555 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:02.555 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:02:02.555
00:02:02.565 [Pipeline] }
00:02:02.584 [Pipeline] // stage
00:02:02.593 [Pipeline] dir
00:02:02.593 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:02:02.595 [Pipeline] {
00:02:02.610 [Pipeline] catchError
00:02:02.612 [Pipeline] {
00:02:02.625 [Pipeline] sh
00:02:02.901 + vagrant ssh-config --host vagrant
00:02:02.901 + sed -ne /^Host/,$p
00:02:02.901 + tee ssh_conf
00:02:06.188 Host vagrant
00:02:06.188 HostName 192.168.121.207
00:02:06.188 User vagrant
00:02:06.188 Port 22
00:02:06.188 UserKnownHostsFile /dev/null
00:02:06.188 StrictHostKeyChecking no
00:02:06.188 PasswordAuthentication no
00:02:06.188 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38
00:02:06.188 IdentitiesOnly yes
00:02:06.188 LogLevel FATAL
00:02:06.188 ForwardAgent yes
00:02:06.188 ForwardX11 yes
00:02:06.188
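
Note: snapshotting vagrant's connection parameters into a plain ssh config, as done above, is what lets the rest of the pipeline drive the VM with stock ssh/scp instead of the slower vagrant ssh wrapper. The same trick works interactively:

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -F ssh_conf vagrant uname -a
    scp -F ssh_conf some-local-file vagrant:/tmp/
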
00:02:06.202 [Pipeline] withEnv
00:02:06.205 [Pipeline] {
00:02:06.221 [Pipeline] sh
00:02:06.499 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:06.499 source /etc/os-release
00:02:06.499 [[ -e /image.version ]] && img=$(< /image.version)
00:02:06.499 # Minimal, systemd-like check.
00:02:06.499 if [[ -e /.dockerenv ]]; then
00:02:06.499 # Clear garbage from the node's name:
00:02:06.500 # agt-er_autotest_547-896 -> autotest_547-896
00:02:06.500 # $HOSTNAME is the actual container id
00:02:06.500 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:06.500 if mountpoint -q /etc/hostname; then
00:02:06.500 # We can assume this is a mount from a host where container is running,
00:02:06.500 # so fetch its hostname to easily identify the target swarm worker.
00:02:06.500 container="$(< /etc/hostname) ($agent)"
00:02:06.500 else
00:02:06.500 # Fallback
00:02:06.500 container=$agent
00:02:06.500 fi
00:02:06.500 fi
00:02:06.500 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:06.500
00:02:06.511 [Pipeline] }
00:02:06.531 [Pipeline] // withEnv
00:02:06.539 [Pipeline] setCustomBuildProperty
00:02:06.554 [Pipeline] stage
00:02:06.556 [Pipeline] { (Tests)
00:02:06.576 [Pipeline] sh
00:02:06.855 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:07.129 [Pipeline] timeout
00:02:07.129 Timeout set to expire in 40 min
00:02:07.131 [Pipeline] {
00:02:07.146 [Pipeline] sh
00:02:07.426 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:07.994 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:02:08.005 [Pipeline] sh
00:02:08.279 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:08.552 [Pipeline] sh
00:02:08.832 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:09.106 [Pipeline] sh
00:02:09.385 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:02:09.385 ++ readlink -f spdk_repo
00:02:09.644 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:09.644 + [[ -n /home/vagrant/spdk_repo ]]
00:02:09.644 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:09.644 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:09.644 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:09.644 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:09.644 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:09.644 + cd /home/vagrant/spdk_repo
00:02:09.644 + source /etc/os-release
00:02:09.644 ++ NAME='Fedora Linux'
00:02:09.644 ++ VERSION='38 (Cloud Edition)'
00:02:09.644 ++ ID=fedora
00:02:09.644 ++ VERSION_ID=38
00:02:09.644 ++ VERSION_CODENAME=
00:02:09.644 ++ PLATFORM_ID=platform:f38
00:02:09.644 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:09.644 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:09.644 ++ LOGO=fedora-logo-icon
00:02:09.644 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:09.644 ++ HOME_URL=https://fedoraproject.org/
00:02:09.644 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:09.644 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:09.644 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:09.644 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:09.644 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:09.644 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:09.644 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:09.644 ++ SUPPORT_END=2024-05-14
00:02:09.644 ++ VARIANT='Cloud Edition'
00:02:09.644 ++ VARIANT_ID=cloud
00:02:09.644 + uname -a
00:02:09.644 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:09.644 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:09.644 Hugepages
00:02:09.644 node hugesize free / total
00:02:09.644 node0 1048576kB 0 / 0
00:02:09.644 node0 2048kB 0 / 0
00:02:09.644
00:02:09.644 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:09.644 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:09.903 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:09.903 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:09.903 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:02:09.903 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme2 nvme2c2n1
00:02:09.903 + rm -f /tmp/spdk-ld-path
00:02:09.903 + source autorun-spdk.conf
00:02:09.903 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.903 ++ SPDK_TEST_NVME=1
00:02:09.903 ++ SPDK_TEST_FTL=1
00:02:09.903 ++ SPDK_TEST_ISAL=1
00:02:09.903 ++ SPDK_RUN_ASAN=1
00:02:09.903 ++ SPDK_RUN_UBSAN=1
00:02:09.903 ++ SPDK_TEST_XNVME=1
00:02:09.903 ++ SPDK_TEST_NVME_FDP=1
00:02:09.903 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.903 ++ RUN_NIGHTLY=1
00:02:09.903 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:09.903 + [[ -n '' ]]
00:02:09.903 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:09.903 + for M in /var/spdk/build-*-manifest.txt
00:02:09.903 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:09.903 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.903 + for M in /var/spdk/build-*-manifest.txt
00:02:09.903 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:09.903 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.903 ++ uname
00:02:09.903 + [[ Linux == \L\i\n\u\x ]]
00:02:09.903 + sudo dmesg -T
00:02:09.903 + sudo dmesg --clear
00:02:10.162 + dmesg_pid=5179
00:02:10.162 + [[ Fedora Linux == FreeBSD ]]
00:02:10.162 + sudo dmesg -Tw
00:02:10.162 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.162 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.162 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:10.162 + [[ -x /usr/src/fio-static/fio ]]
00:02:10.162 + export FIO_BIN=/usr/src/fio-static/fio
00:02:10.162 + FIO_BIN=/usr/src/fio-static/fio
00:02:10.162 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:10.162 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:10.162 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:10.162 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.162 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:10.162 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:10.162 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.162 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:10.162 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:10.162 Test configuration:
00:02:10.162 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.162 SPDK_TEST_NVME=1
00:02:10.162 SPDK_TEST_FTL=1
00:02:10.162 SPDK_TEST_ISAL=1
00:02:10.162 SPDK_RUN_ASAN=1
00:02:10.162 SPDK_RUN_UBSAN=1
00:02:10.162 SPDK_TEST_XNVME=1
00:02:10.162 SPDK_TEST_NVME_FDP=1
00:02:10.162 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.162 RUN_NIGHTLY=1
04:42:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
04:42:17 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
04:42:17 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:42:17 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
04:42:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:42:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:42:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:42:17 -- paths/export.sh@5 -- $ export PATH
04:42:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:42:17 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
04:42:17 -- common/autobuild_common.sh@435 -- $ date +%s
04:42:17 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715488937.XXXXXX
04:42:17 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715488937.d4Ebwc
04:42:17 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
04:42:17 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
04:42:17 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
04:42:17 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
04:42:17 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
04:42:17 -- common/autobuild_common.sh@451 -- $ get_config_params
04:42:17 -- common/autotest_common.sh@387 -- $ xtrace_disable
04:42:17 -- common/autotest_common.sh@10 -- $ set +x
04:42:17 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
04:42:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
04:42:17 -- spdk/autobuild.sh@12 -- $ umask 022
04:42:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
04:42:17 -- spdk/autobuild.sh@16 -- $ date -u
00:02:10.163 Sun May 12 04:42:17 AM UTC 2024
04:42:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:10.163 LTS-24-g36faa8c31
04:42:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
04:42:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
04:42:17 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
04:42:17 -- common/autotest_common.sh@1083 -- $ xtrace_disable
04:42:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.163 ************************************
00:02:10.163 START TEST asan
00:02:10.163 ************************************
00:02:10.163 using asan
00:02:10.163
00:02:10.163 real 0m0.000s
00:02:10.163 user 0m0.000s
00:02:10.163 sys 0m0.000s
00:02:10.163 04:42:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:10.163 04:42:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.163 ************************************
00:02:10.163 END TEST asan
00:02:10.163 ************************************
04:42:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
04:42:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
04:42:17 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
04:42:17 -- common/autotest_common.sh@1083 -- $ xtrace_disable
04:42:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.163 ************************************
00:02:10.163 START TEST ubsan
00:02:10.163 ************************************
00:02:10.163 using ubsan
00:02:10.163
00:02:10.163 real 0m0.000s
00:02:10.163 user 0m0.000s
00:02:10.163 sys 0m0.000s
00:02:10.163 04:42:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:10.163 04:42:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.163 ************************************
00:02:10.163 END TEST ubsan
00:02:10.163 ************************************
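
Note: run_test is SPDK's harness wrapper and its implementation is not visible in this log, but from the START TEST/END TEST banners and the real/user/sys timings above it evidently runs a command under time between labeled markers, roughly like this sketch (not SPDK's actual code):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. run_test asan echo 'using asan'
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
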
04:42:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
04:42:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
04:42:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
04:42:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
04:42:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
04:42:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
04:42:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
04:42:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
04:42:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:10.421 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:10.421 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:10.680 Using 'verbs' RDMA provider
00:02:26.498 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:38.711 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:38.711 Creating mk/config.mk...done.
00:02:38.711 Creating mk/cc.flags.mk...done.
00:02:38.711 Type 'make' to build.
04:42:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
04:42:44 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
04:42:44 -- common/autotest_common.sh@1083 -- $ xtrace_disable
04:42:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:38.711 ************************************
00:02:38.711 START TEST make
00:02:38.711 ************************************
04:42:44 -- common/autotest_common.sh@1104 -- $ make -j10
00:02:38.711 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:38.711 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:38.711 meson setup builddir \
00:02:38.711 -Dwith-libaio=enabled \
00:02:38.711 -Dwith-liburing=enabled \
00:02:38.711 -Dwith-libvfn=disabled \
00:02:38.711 -Dwith-spdk=false && \
00:02:38.711 meson compile -C builddir && \
00:02:38.711 cd -)
00:02:38.711 make[1]: Nothing to be done for 'all'.
00:02:40.082 The Meson build system
00:02:40.082 Version: 1.3.1
00:02:40.082 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:40.082 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:40.082 Build type: native build
00:02:40.082 Project name: xnvme
00:02:40.082 Project version: 0.7.3
00:02:40.082 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:40.082 C linker for the host machine: cc ld.bfd 2.39-16
00:02:40.082 Host machine cpu family: x86_64
00:02:40.082 Host machine cpu: x86_64
00:02:40.082 Message: host_machine.system: linux
00:02:40.082 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:40.082 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:40.082 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:40.082 Run-time dependency threads found: YES
00:02:40.082 Has header "setupapi.h" : NO
00:02:40.082 Has header "linux/blkzoned.h" : YES
00:02:40.082 Has header "linux/blkzoned.h" : YES (cached)
00:02:40.082 Has header "libaio.h" : YES
00:02:40.082 Library aio found: YES
00:02:40.082 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:40.082 Run-time dependency liburing found: YES 2.2
00:02:40.082 Dependency libvfn skipped: feature with-libvfn disabled
00:02:40.082 Run-time dependency appleframeworks found: NO (tried framework)
00:02:40.082 Run-time dependency appleframeworks found: NO (tried framework)
00:02:40.082 Configuring xnvme_config.h using configuration
00:02:40.082 Configuring xnvme.spec using configuration
00:02:40.082 Run-time dependency bash-completion found: YES 2.11
00:02:40.082 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:40.082 Program cp found: YES (/usr/bin/cp)
00:02:40.082 Has header "winsock2.h" : NO
00:02:40.082 Has header "dbghelp.h" : NO
00:02:40.082 Library rpcrt4 found: NO
00:02:40.082 Library rt found: YES
00:02:40.082 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:40.082 Found CMake: /usr/bin/cmake (3.27.7)
00:02:40.082 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:40.082 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:40.082 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:40.082 Build targets in project: 32
00:02:40.082
00:02:40.082 xnvme 0.7.3
00:02:40.082
00:02:40.082 User defined options
00:02:40.082 with-libaio : enabled
00:02:40.082 with-liburing: enabled
00:02:40.082 with-libvfn : disabled
00:02:40.082 with-spdk : false
00:02:40.082
00:02:40.082 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:40.648 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:40.648 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:40.648 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:40.648 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:40.648 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:40.648 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:40.648 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:40.648 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:40.648 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:40.648 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:40.648 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:40.648 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:40.648 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:40.906 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:40.906 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:40.906 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:40.906 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:40.906 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:40.906 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:40.906 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:40.906 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:40.906 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:40.906 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:40.906 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:40.906 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:40.906 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:40.906 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:40.906 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:40.906 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:40.906 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:40.906 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:40.906 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:40.906 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:40.906 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:40.906 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:41.164 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:41.164 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:41.164 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:41.164 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:41.164 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:41.164 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:41.164 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:41.164 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:41.164 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:41.164 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:41.164 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:41.164 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:41.164 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:41.164 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:41.164 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:41.164 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:41.164 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:41.164 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:41.165 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:41.165 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:41.165 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:41.165 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:41.165 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:41.165 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:41.165 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:41.165 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:41.165 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:41.165 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:41.423 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:41.423 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:41.423 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:41.423 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:41.423 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:41.423 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:41.423 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:41.423 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:41.423 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:41.423 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:41.423 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:41.423 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:41.681 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:41.681 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:41.681 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:41.681 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:41.681 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:41.681 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:41.681 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:41.681 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:41.681 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:41.681 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:41.681 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:41.681 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:41.681 [87/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:41.681 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:41.681 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:41.681 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:41.681 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:41.939 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:41.939 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:41.939 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:41.939 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:41.939 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:41.939 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:41.939 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:41.940 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:41.940 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:41.940 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:41.940 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:41.940 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:41.940 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:41.940 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:41.940 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:41.940 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:41.940 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:41.940 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:41.940 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:41.940 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:41.940 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:41.940 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:41.940 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:41.940 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:41.940 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:41.940 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:41.940 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:41.940 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:41.940 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:41.940 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:41.940 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:42.198 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:42.198 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:42.198 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:42.198 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:42.198 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:42.198 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:42.198 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:42.198 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:42.198 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:42.198 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:42.198 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:42.198 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:42.198 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:42.198 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:42.198 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:42.198 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:42.456 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:42.456 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:42.456 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:42.456 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:42.456 [143/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:42.456 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:42.456 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:42.456 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:42.456 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:42.456 [148/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:42.456 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:42.715 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:42.715 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:42.715 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:42.715 [153/203] Linking target lib/libxnvme.so
00:02:42.715 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:42.715 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:42.715 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:42.715 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:42.715 [158/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:42.715 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:42.715 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:42.715 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:42.973 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:42.973 [163/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:42.973 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:42.973 [165/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:42.973 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:42.973 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:42.973 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:42.973 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:42.973 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:42.973 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:42.973 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:43.231 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:43.231 [174/203] Linking static target lib/libxnvme.a
00:02:43.231 [175/203] Linking target tests/xnvme_tests_buf
00:02:43.231 [176/203] Linking target tests/xnvme_tests_async_intf
00:02:43.231 [177/203] Linking target tests/xnvme_tests_lblk
00:02:43.231 [178/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:43.231 [179/203] Linking target tests/xnvme_tests_xnvme_file
00:02:43.231 [180/203] Linking target tests/xnvme_tests_znd_state
00:02:43.231 [181/203] Linking target tests/xnvme_tests_ioworker
00:02:43.231 [182/203] Linking target tests/xnvme_tests_cli
00:02:43.231 [183/203] Linking target tests/xnvme_tests_enum
00:02:43.231 [184/203] Linking target tests/xnvme_tests_scc
00:02:43.231 [185/203] Linking target tests/xnvme_tests_znd_append
00:02:43.231 [186/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:43.231 [187/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:43.231 [188/203] Linking target tests/xnvme_tests_kvs
00:02:43.231 [189/203] Linking target tools/lblk
00:02:43.231 [190/203] Linking target tests/xnvme_tests_map
00:02:43.231 [191/203] Linking target tools/xnvme
00:02:43.231 [192/203] Linking target examples/xnvme_dev
00:02:43.231 [193/203] Linking target tools/kvs
00:02:43.231 [194/203] Linking target tools/xdd
00:02:43.231 [195/203] Linking target tools/xnvme_file
00:02:43.231 [196/203] Linking target tools/zoned
00:02:43.231 [197/203] Linking target examples/xnvme_io_async
00:02:43.231 [198/203] Linking target examples/xnvme_enum
00:02:43.231 [199/203] Linking target examples/xnvme_hello
00:02:43.231 [200/203] Linking target examples/xnvme_single_sync
00:02:43.231 [201/203] Linking target examples/xnvme_single_async
00:02:43.231 [202/203] Linking target examples/zoned_io_sync
00:02:43.231 [203/203] Linking target examples/zoned_io_async
00:02:43.490 INFO: autodetecting backend as ninja
00:02:43.490 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:50.076 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:50.076 The Meson build system
00:02:50.076 Version: 1.3.1
00:02:50.076 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:50.076 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:50.076 Build type: native build
00:02:50.076 Program cat found: YES (/usr/bin/cat)
00:02:50.076 Project name: DPDK
00:02:50.076 Project version: 23.11.0
00:02:50.076 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:50.076 C linker for the host machine: cc ld.bfd 2.39-16
00:02:50.076 Host machine cpu family: x86_64
00:02:50.076 Host machine cpu: x86_64
00:02:50.076 Message: ## Building in Developer Mode ##
00:02:50.076 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:50.076 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:50.076 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:50.076 Program python3 found: YES (/usr/bin/python3)
00:02:50.076 Program cat found: YES (/usr/bin/cat)
00:02:50.076 Compiler for C supports arguments -march=native: YES
00:02:50.076 Checking for size of "void *" : 8
00:02:50.076 Checking for size of "void *" : 8 (cached)
00:02:50.076 Library m found: YES
00:02:50.076 Library numa found: YES
00:02:50.076 Has header "numaif.h" : YES
00:02:50.076 Library fdt found: NO
00:02:50.076 Library execinfo found: NO
00:02:50.076 Has header "execinfo.h" : YES
00:02:50.076 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:50.076 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:50.076 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:50.076 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:50.076 Run-time dependency openssl found: YES 3.0.9
00:02:50.076 Run-time dependency libpcap found: YES 1.10.4
00:02:50.077 Has header "pcap.h" with dependency libpcap: YES
00:02:50.077 Compiler for C supports arguments -Wcast-qual: YES
00:02:50.077 Compiler for C supports arguments -Wdeprecated: YES
00:02:50.077 Compiler for C supports arguments -Wformat: YES
00:02:50.077 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:50.077 Compiler for C supports arguments -Wformat-security: NO
00:02:50.077 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:50.077 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:50.077 Compiler for C supports arguments -Wnested-externs: YES
00:02:50.077 Compiler for C supports arguments -Wold-style-definition: YES
00:02:50.077 Compiler for C supports arguments -Wpointer-arith: YES
00:02:50.077 Compiler for C supports arguments -Wsign-compare: YES 00:02:50.077 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:50.077 Compiler for C supports arguments -Wundef: YES 00:02:50.077 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.077 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:50.077 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:50.077 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.077 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:50.077 Program objdump found: YES (/usr/bin/objdump) 00:02:50.077 Compiler for C supports arguments -mavx512f: YES 00:02:50.077 Checking if "AVX512 checking" compiles: YES 00:02:50.077 Fetching value of define "__SSE4_2__" : 1 00:02:50.077 Fetching value of define "__AES__" : 1 00:02:50.077 Fetching value of define "__AVX__" : 1 00:02:50.077 Fetching value of define "__AVX2__" : 1 00:02:50.077 Fetching value of define "__AVX512BW__" : (undefined) 00:02:50.077 Fetching value of define "__AVX512CD__" : (undefined) 00:02:50.077 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:50.077 Fetching value of define "__AVX512F__" : (undefined) 00:02:50.077 Fetching value of define "__AVX512VL__" : (undefined) 00:02:50.077 Fetching value of define "__PCLMUL__" : 1 00:02:50.077 Fetching value of define "__RDRND__" : 1 00:02:50.077 Fetching value of define "__RDSEED__" : 1 00:02:50.077 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:50.077 Fetching value of define "__znver1__" : (undefined) 00:02:50.077 Fetching value of define "__znver2__" : (undefined) 00:02:50.077 Fetching value of define "__znver3__" : (undefined) 00:02:50.077 Fetching value of define "__znver4__" : (undefined) 00:02:50.077 Library asan found: YES 00:02:50.077 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:50.077 Message: lib/log: Defining dependency "log" 00:02:50.077 Message: lib/kvargs: Defining dependency "kvargs" 00:02:50.077 Message: lib/telemetry: Defining dependency "telemetry" 00:02:50.077 Library rt found: YES 00:02:50.077 Checking for function "getentropy" : NO 00:02:50.077 Message: lib/eal: Defining dependency "eal" 00:02:50.077 Message: lib/ring: Defining dependency "ring" 00:02:50.077 Message: lib/rcu: Defining dependency "rcu" 00:02:50.077 Message: lib/mempool: Defining dependency "mempool" 00:02:50.077 Message: lib/mbuf: Defining dependency "mbuf" 00:02:50.077 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:50.077 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:50.077 Compiler for C supports arguments -mpclmul: YES 00:02:50.077 Compiler for C supports arguments -maes: YES 00:02:50.077 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:50.077 Compiler for C supports arguments -mavx512bw: YES 00:02:50.077 Compiler for C supports arguments -mavx512dq: YES 00:02:50.077 Compiler for C supports arguments -mavx512vl: YES 00:02:50.077 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:50.077 Compiler for C supports arguments -mavx2: YES 00:02:50.077 Compiler for C supports arguments -mavx: YES 00:02:50.077 Message: lib/net: Defining dependency "net" 00:02:50.077 Message: lib/meter: Defining dependency "meter" 00:02:50.077 Message: lib/ethdev: Defining dependency "ethdev" 00:02:50.077 Message: lib/pci: Defining dependency "pci" 00:02:50.077 Message: lib/cmdline: Defining dependency "cmdline" 00:02:50.077 Message: lib/hash: Defining dependency "hash" 
00:02:50.077 Message: lib/timer: Defining dependency "timer" 00:02:50.077 Message: lib/compressdev: Defining dependency "compressdev" 00:02:50.077 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:50.077 Message: lib/dmadev: Defining dependency "dmadev" 00:02:50.077 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:50.077 Message: lib/power: Defining dependency "power" 00:02:50.077 Message: lib/reorder: Defining dependency "reorder" 00:02:50.077 Message: lib/security: Defining dependency "security" 00:02:50.077 Has header "linux/userfaultfd.h" : YES 00:02:50.077 Has header "linux/vduse.h" : YES 00:02:50.077 Message: lib/vhost: Defining dependency "vhost" 00:02:50.077 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:50.077 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:50.077 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:50.077 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:50.077 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:50.077 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:50.077 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:50.077 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:50.077 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:50.077 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:50.077 Program doxygen found: YES (/usr/bin/doxygen) 00:02:50.077 Configuring doxy-api-html.conf using configuration 00:02:50.077 Configuring doxy-api-man.conf using configuration 00:02:50.077 Program mandb found: YES (/usr/bin/mandb) 00:02:50.077 Program sphinx-build found: NO 00:02:50.077 Configuring rte_build_config.h using configuration 00:02:50.077 Message: 00:02:50.077 ================= 00:02:50.077 Applications Enabled 00:02:50.077 ================= 00:02:50.077 00:02:50.077 apps: 00:02:50.077 00:02:50.077 00:02:50.077 Message: 00:02:50.077 ================= 00:02:50.077 Libraries Enabled 00:02:50.077 ================= 00:02:50.077 00:02:50.077 libs: 00:02:50.077 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:50.077 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:50.077 cryptodev, dmadev, power, reorder, security, vhost, 00:02:50.077 00:02:50.077 Message: 00:02:50.077 =============== 00:02:50.077 Drivers Enabled 00:02:50.077 =============== 00:02:50.077 00:02:50.077 common: 00:02:50.077 00:02:50.077 bus: 00:02:50.077 pci, vdev, 00:02:50.077 mempool: 00:02:50.077 ring, 00:02:50.077 dma: 00:02:50.077 00:02:50.077 net: 00:02:50.077 00:02:50.077 crypto: 00:02:50.077 00:02:50.077 compress: 00:02:50.077 00:02:50.077 vdpa: 00:02:50.077 00:02:50.077 00:02:50.077 Message: 00:02:50.077 ================= 00:02:50.077 Content Skipped 00:02:50.077 ================= 00:02:50.077 00:02:50.077 apps: 00:02:50.077 dumpcap: explicitly disabled via build config 00:02:50.077 graph: explicitly disabled via build config 00:02:50.077 pdump: explicitly disabled via build config 00:02:50.077 proc-info: explicitly disabled via build config 00:02:50.077 test-acl: explicitly disabled via build config 00:02:50.077 test-bbdev: explicitly disabled via build config 00:02:50.077 test-cmdline: explicitly disabled via build config 00:02:50.077 test-compress-perf: explicitly disabled via build config 00:02:50.077 test-crypto-perf: explicitly disabled via build config 00:02:50.077 test-dma-perf: explicitly 
disabled via build config 00:02:50.077 test-eventdev: explicitly disabled via build config 00:02:50.077 test-fib: explicitly disabled via build config 00:02:50.077 test-flow-perf: explicitly disabled via build config 00:02:50.077 test-gpudev: explicitly disabled via build config 00:02:50.077 test-mldev: explicitly disabled via build config 00:02:50.077 test-pipeline: explicitly disabled via build config 00:02:50.077 test-pmd: explicitly disabled via build config 00:02:50.077 test-regex: explicitly disabled via build config 00:02:50.077 test-sad: explicitly disabled via build config 00:02:50.077 test-security-perf: explicitly disabled via build config 00:02:50.077 00:02:50.077 libs: 00:02:50.077 metrics: explicitly disabled via build config 00:02:50.077 acl: explicitly disabled via build config 00:02:50.077 bbdev: explicitly disabled via build config 00:02:50.077 bitratestats: explicitly disabled via build config 00:02:50.077 bpf: explicitly disabled via build config 00:02:50.077 cfgfile: explicitly disabled via build config 00:02:50.077 distributor: explicitly disabled via build config 00:02:50.077 efd: explicitly disabled via build config 00:02:50.077 eventdev: explicitly disabled via build config 00:02:50.077 dispatcher: explicitly disabled via build config 00:02:50.077 gpudev: explicitly disabled via build config 00:02:50.077 gro: explicitly disabled via build config 00:02:50.077 gso: explicitly disabled via build config 00:02:50.077 ip_frag: explicitly disabled via build config 00:02:50.077 jobstats: explicitly disabled via build config 00:02:50.077 latencystats: explicitly disabled via build config 00:02:50.077 lpm: explicitly disabled via build config 00:02:50.077 member: explicitly disabled via build config 00:02:50.077 pcapng: explicitly disabled via build config 00:02:50.077 rawdev: explicitly disabled via build config 00:02:50.078 regexdev: explicitly disabled via build config 00:02:50.078 mldev: explicitly disabled via build config 00:02:50.078 rib: explicitly disabled via build config 00:02:50.078 sched: explicitly disabled via build config 00:02:50.078 stack: explicitly disabled via build config 00:02:50.078 ipsec: explicitly disabled via build config 00:02:50.078 pdcp: explicitly disabled via build config 00:02:50.078 fib: explicitly disabled via build config 00:02:50.078 port: explicitly disabled via build config 00:02:50.078 pdump: explicitly disabled via build config 00:02:50.078 table: explicitly disabled via build config 00:02:50.078 pipeline: explicitly disabled via build config 00:02:50.078 graph: explicitly disabled via build config 00:02:50.078 node: explicitly disabled via build config 00:02:50.078 00:02:50.078 drivers: 00:02:50.078 common/cpt: not in enabled drivers build config 00:02:50.078 common/dpaax: not in enabled drivers build config 00:02:50.078 common/iavf: not in enabled drivers build config 00:02:50.078 common/idpf: not in enabled drivers build config 00:02:50.078 common/mvep: not in enabled drivers build config 00:02:50.078 common/octeontx: not in enabled drivers build config 00:02:50.078 bus/auxiliary: not in enabled drivers build config 00:02:50.078 bus/cdx: not in enabled drivers build config 00:02:50.078 bus/dpaa: not in enabled drivers build config 00:02:50.078 bus/fslmc: not in enabled drivers build config 00:02:50.078 bus/ifpga: not in enabled drivers build config 00:02:50.078 bus/platform: not in enabled drivers build config 00:02:50.078 bus/vmbus: not in enabled drivers build config 00:02:50.078 common/cnxk: not in enabled drivers build config 
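Everything in this "Content Skipped" section is the direct result of the options passed when SPDK configures its bundled DPDK: only the bus/pci, bus/vdev and mempool/ring drivers survive. A hedged sketch of an equivalent standalone invocation, with the option values copied from the "User defined options" summary a little further down (the full disable lists are elided here; the actual scripting lives in SPDK's dpdkbuild helpers):

    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,bbdev,bitratestats,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Denable_docs=false -Dtests=false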
00:02:50.078 common/mlx5: not in enabled drivers build config 00:02:50.078 common/nfp: not in enabled drivers build config 00:02:50.078 common/qat: not in enabled drivers build config 00:02:50.078 common/sfc_efx: not in enabled drivers build config 00:02:50.078 mempool/bucket: not in enabled drivers build config 00:02:50.078 mempool/cnxk: not in enabled drivers build config 00:02:50.078 mempool/dpaa: not in enabled drivers build config 00:02:50.078 mempool/dpaa2: not in enabled drivers build config 00:02:50.078 mempool/octeontx: not in enabled drivers build config 00:02:50.078 mempool/stack: not in enabled drivers build config 00:02:50.078 dma/cnxk: not in enabled drivers build config 00:02:50.078 dma/dpaa: not in enabled drivers build config 00:02:50.078 dma/dpaa2: not in enabled drivers build config 00:02:50.078 dma/hisilicon: not in enabled drivers build config 00:02:50.078 dma/idxd: not in enabled drivers build config 00:02:50.078 dma/ioat: not in enabled drivers build config 00:02:50.078 dma/skeleton: not in enabled drivers build config 00:02:50.078 net/af_packet: not in enabled drivers build config 00:02:50.078 net/af_xdp: not in enabled drivers build config 00:02:50.078 net/ark: not in enabled drivers build config 00:02:50.078 net/atlantic: not in enabled drivers build config 00:02:50.078 net/avp: not in enabled drivers build config 00:02:50.078 net/axgbe: not in enabled drivers build config 00:02:50.078 net/bnx2x: not in enabled drivers build config 00:02:50.078 net/bnxt: not in enabled drivers build config 00:02:50.078 net/bonding: not in enabled drivers build config 00:02:50.078 net/cnxk: not in enabled drivers build config 00:02:50.078 net/cpfl: not in enabled drivers build config 00:02:50.078 net/cxgbe: not in enabled drivers build config 00:02:50.078 net/dpaa: not in enabled drivers build config 00:02:50.078 net/dpaa2: not in enabled drivers build config 00:02:50.078 net/e1000: not in enabled drivers build config 00:02:50.078 net/ena: not in enabled drivers build config 00:02:50.078 net/enetc: not in enabled drivers build config 00:02:50.078 net/enetfec: not in enabled drivers build config 00:02:50.078 net/enic: not in enabled drivers build config 00:02:50.078 net/failsafe: not in enabled drivers build config 00:02:50.078 net/fm10k: not in enabled drivers build config 00:02:50.078 net/gve: not in enabled drivers build config 00:02:50.078 net/hinic: not in enabled drivers build config 00:02:50.078 net/hns3: not in enabled drivers build config 00:02:50.078 net/i40e: not in enabled drivers build config 00:02:50.078 net/iavf: not in enabled drivers build config 00:02:50.078 net/ice: not in enabled drivers build config 00:02:50.078 net/idpf: not in enabled drivers build config 00:02:50.078 net/igc: not in enabled drivers build config 00:02:50.078 net/ionic: not in enabled drivers build config 00:02:50.078 net/ipn3ke: not in enabled drivers build config 00:02:50.078 net/ixgbe: not in enabled drivers build config 00:02:50.078 net/mana: not in enabled drivers build config 00:02:50.078 net/memif: not in enabled drivers build config 00:02:50.078 net/mlx4: not in enabled drivers build config 00:02:50.078 net/mlx5: not in enabled drivers build config 00:02:50.078 net/mvneta: not in enabled drivers build config 00:02:50.078 net/mvpp2: not in enabled drivers build config 00:02:50.078 net/netvsc: not in enabled drivers build config 00:02:50.078 net/nfb: not in enabled drivers build config 00:02:50.078 net/nfp: not in enabled drivers build config 00:02:50.078 net/ngbe: not in enabled drivers 
build config 00:02:50.078 net/null: not in enabled drivers build config 00:02:50.078 net/octeontx: not in enabled drivers build config 00:02:50.078 net/octeon_ep: not in enabled drivers build config 00:02:50.078 net/pcap: not in enabled drivers build config 00:02:50.078 net/pfe: not in enabled drivers build config 00:02:50.078 net/qede: not in enabled drivers build config 00:02:50.078 net/ring: not in enabled drivers build config 00:02:50.078 net/sfc: not in enabled drivers build config 00:02:50.078 net/softnic: not in enabled drivers build config 00:02:50.078 net/tap: not in enabled drivers build config 00:02:50.078 net/thunderx: not in enabled drivers build config 00:02:50.078 net/txgbe: not in enabled drivers build config 00:02:50.078 net/vdev_netvsc: not in enabled drivers build config 00:02:50.078 net/vhost: not in enabled drivers build config 00:02:50.078 net/virtio: not in enabled drivers build config 00:02:50.078 net/vmxnet3: not in enabled drivers build config 00:02:50.078 raw/*: missing internal dependency, "rawdev" 00:02:50.078 crypto/armv8: not in enabled drivers build config 00:02:50.078 crypto/bcmfs: not in enabled drivers build config 00:02:50.078 crypto/caam_jr: not in enabled drivers build config 00:02:50.078 crypto/ccp: not in enabled drivers build config 00:02:50.078 crypto/cnxk: not in enabled drivers build config 00:02:50.078 crypto/dpaa_sec: not in enabled drivers build config 00:02:50.078 crypto/dpaa2_sec: not in enabled drivers build config 00:02:50.078 crypto/ipsec_mb: not in enabled drivers build config 00:02:50.078 crypto/mlx5: not in enabled drivers build config 00:02:50.078 crypto/mvsam: not in enabled drivers build config 00:02:50.078 crypto/nitrox: not in enabled drivers build config 00:02:50.078 crypto/null: not in enabled drivers build config 00:02:50.078 crypto/octeontx: not in enabled drivers build config 00:02:50.078 crypto/openssl: not in enabled drivers build config 00:02:50.078 crypto/scheduler: not in enabled drivers build config 00:02:50.078 crypto/uadk: not in enabled drivers build config 00:02:50.078 crypto/virtio: not in enabled drivers build config 00:02:50.078 compress/isal: not in enabled drivers build config 00:02:50.078 compress/mlx5: not in enabled drivers build config 00:02:50.078 compress/octeontx: not in enabled drivers build config 00:02:50.078 compress/zlib: not in enabled drivers build config 00:02:50.078 regex/*: missing internal dependency, "regexdev" 00:02:50.078 ml/*: missing internal dependency, "mldev" 00:02:50.078 vdpa/ifc: not in enabled drivers build config 00:02:50.078 vdpa/mlx5: not in enabled drivers build config 00:02:50.078 vdpa/nfp: not in enabled drivers build config 00:02:50.078 vdpa/sfc: not in enabled drivers build config 00:02:50.078 event/*: missing internal dependency, "eventdev" 00:02:50.078 baseband/*: missing internal dependency, "bbdev" 00:02:50.078 gpu/*: missing internal dependency, "gpudev" 00:02:50.078 00:02:50.078 00:02:50.337 Build targets in project: 85 00:02:50.337 00:02:50.337 DPDK 23.11.0 00:02:50.337 00:02:50.337 User defined options 00:02:50.337 buildtype : debug 00:02:50.337 default_library : shared 00:02:50.337 libdir : lib 00:02:50.337 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:50.337 b_sanitize : address 00:02:50.337 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:50.337 c_link_args : 00:02:50.337 cpu_instruction_set: native 00:02:50.337 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:50.337 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:50.337 enable_docs : false 00:02:50.337 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:50.337 enable_kmods : false 00:02:50.337 tests : false 00:02:50.337 00:02:50.337 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.901 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:50.901 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.901 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.901 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.901 [4/265] Linking static target lib/librte_kvargs.a 00:02:50.901 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:51.159 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:51.159 [7/265] Linking static target lib/librte_log.a 00:02:51.159 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:51.159 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:51.159 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:51.416 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.982 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:51.982 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:51.982 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:51.982 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:51.982 [16/265] Linking static target lib/librte_telemetry.a 00:02:51.982 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.982 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.982 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:51.982 [20/265] Linking target lib/librte_log.so.24.0 00:02:52.239 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:52.239 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:52.239 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:52.239 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:52.497 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:52.497 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.755 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:52.755 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:52.755 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.755 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:52.755 [31/265] Linking target 
lib/librte_telemetry.so.24.0 00:02:52.755 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:52.755 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.013 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.013 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:53.271 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.271 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.529 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.529 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:53.529 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.529 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:53.529 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.529 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:53.529 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:53.787 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.787 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.787 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.045 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.045 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.045 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.302 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.560 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.560 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.560 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.560 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:54.560 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.819 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.819 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.819 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.819 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.819 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:54.819 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.077 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.077 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.336 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.336 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.594 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:55.594 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:55.594 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:55.852 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.852 
[71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.852 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:55.852 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:55.852 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:55.852 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.852 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.110 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.110 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:56.368 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:56.368 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:56.368 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:56.626 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.884 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:56.884 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.884 [85/265] Linking static target lib/librte_ring.a 00:02:56.884 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.142 [87/265] Linking static target lib/librte_eal.a 00:02:57.142 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.142 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.401 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:57.401 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.401 [92/265] Linking static target lib/librte_mempool.a 00:02:57.659 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.659 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.659 [95/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.659 [96/265] Linking static target lib/librte_rcu.a 00:02:57.659 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.917 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:57.917 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:58.176 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:58.176 [101/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.435 [102/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.435 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:58.692 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:58.693 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.693 [106/265] Linking static target lib/librte_mbuf.a 00:02:58.693 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.693 [108/265] Linking static target lib/librte_meter.a 00:02:58.693 [109/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:58.693 [110/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:58.693 [111/265] Linking static target lib/librte_net.a 00:02:58.693 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.952 [113/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.952 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.214 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.214 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:59.214 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:59.479 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:59.738 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.738 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.305 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:00.305 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:00.305 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:00.305 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:00.305 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.305 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:00.563 [127/265] Linking static target lib/librte_pci.a 00:03:00.563 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:00.563 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:00.563 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:00.855 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:00.855 [132/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.855 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:00.855 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:00.855 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:01.113 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:01.113 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:01.113 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:01.113 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.113 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:01.113 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:01.113 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:01.371 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:01.371 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:01.371 [145/265] Linking static target lib/librte_cmdline.a 00:03:01.371 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.938 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:01.938 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:01.938 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:01.938 [150/265] Linking static target lib/librte_timer.a 00:03:01.938 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:01.938 [152/265] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.196 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:02.196 [154/265] Linking static target lib/librte_ethdev.a 00:03:02.454 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:02.454 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:02.454 [157/265] Linking static target lib/librte_compressdev.a 00:03:02.454 [158/265] Linking static target lib/librte_hash.a 00:03:02.713 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.713 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.713 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:02.713 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.713 [163/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:02.713 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:03.280 [165/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.280 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.280 [167/265] Linking static target lib/librte_dmadev.a 00:03:03.280 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:03.280 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:03.280 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:03.538 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.538 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:03.796 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.796 [174/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.796 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:04.054 [176/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.054 [177/265] Linking static target lib/librte_cryptodev.a 00:03:04.054 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:04.054 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:04.054 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:04.054 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:04.312 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.312 [183/265] Linking static target lib/librte_power.a 00:03:04.570 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:04.570 [185/265] Linking static target lib/librte_reorder.a 00:03:04.570 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.827 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.827 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:04.827 [189/265] Linking static target lib/librte_security.a 00:03:04.827 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:05.085 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.085 [192/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.347 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.347 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.615 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:05.615 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:05.872 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:05.872 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.872 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:06.130 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.130 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.130 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.130 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.387 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:06.387 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:06.387 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:06.643 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.643 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:06.643 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:06.643 [210/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.643 [211/265] Linking static target drivers/librte_bus_vdev.a 00:03:06.643 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.643 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:06.643 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.643 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.901 [216/265] Linking static target drivers/librte_bus_pci.a 00:03:06.901 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.901 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:06.901 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.159 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:07.159 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.159 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.159 [223/265] Linking static target drivers/librte_mempool_ring.a 00:03:07.159 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.092 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.350 [226/265] Linking target lib/librte_eal.so.24.0 00:03:08.350 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:08.350 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:08.350 [229/265] Linking target 
lib/librte_timer.so.24.0 00:03:08.350 [230/265] Linking target lib/librte_ring.so.24.0 00:03:08.350 [231/265] Linking target lib/librte_meter.so.24.0 00:03:08.350 [232/265] Linking target lib/librte_pci.so.24.0 00:03:08.350 [233/265] Linking target lib/librte_dmadev.so.24.0 00:03:08.350 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:08.608 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:08.608 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:08.608 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:08.608 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:08.608 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:08.608 [240/265] Linking target lib/librte_rcu.so.24.0 00:03:08.608 [241/265] Linking target lib/librte_mempool.so.24.0 00:03:08.608 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:08.866 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:08.867 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:08.867 [245/265] Linking target lib/librte_mbuf.so.24.0 00:03:08.867 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:08.867 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:09.125 [248/265] Linking target lib/librte_cryptodev.so.24.0 00:03:09.125 [249/265] Linking target lib/librte_compressdev.so.24.0 00:03:09.125 [250/265] Linking target lib/librte_reorder.so.24.0 00:03:09.125 [251/265] Linking target lib/librte_net.so.24.0 00:03:09.125 [252/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:09.125 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:09.125 [254/265] Linking target lib/librte_cmdline.so.24.0 00:03:09.125 [255/265] Linking target lib/librte_hash.so.24.0 00:03:09.125 [256/265] Linking target lib/librte_security.so.24.0 00:03:09.383 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.383 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:09.383 [259/265] Linking target lib/librte_ethdev.so.24.0 00:03:09.642 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:09.642 [261/265] Linking target lib/librte_power.so.24.0 00:03:12.175 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.175 [263/265] Linking static target lib/librte_vhost.a 00:03:13.549 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.549 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:13.549 INFO: autodetecting backend as ninja 00:03:13.549 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:14.923 CC lib/log/log.o 00:03:14.923 CC lib/log/log_flags.o 00:03:14.923 CC lib/log/log_deprecated.o 00:03:14.924 CC lib/ut_mock/mock.o 00:03:14.924 CC lib/ut/ut.o 00:03:14.924 LIB libspdk_ut_mock.a 00:03:14.924 SO libspdk_ut_mock.so.5.0 00:03:14.924 LIB libspdk_log.a 00:03:14.924 LIB libspdk_ut.a 00:03:14.924 SO libspdk_log.so.6.1 00:03:14.924 SO libspdk_ut.so.1.0 00:03:14.924 SYMLINK libspdk_ut_mock.so 00:03:14.924 SYMLINK libspdk_ut.so 00:03:14.924 
SYMLINK libspdk_log.so 00:03:15.182 CC lib/util/base64.o 00:03:15.182 CC lib/util/bit_array.o 00:03:15.182 CC lib/dma/dma.o 00:03:15.182 CC lib/util/cpuset.o 00:03:15.182 CC lib/util/crc16.o 00:03:15.182 CC lib/util/crc32.o 00:03:15.182 CC lib/util/crc32c.o 00:03:15.182 CC lib/ioat/ioat.o 00:03:15.182 CXX lib/trace_parser/trace.o 00:03:15.182 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.439 LIB libspdk_dma.a 00:03:15.439 CC lib/util/crc32_ieee.o 00:03:15.439 CC lib/util/crc64.o 00:03:15.439 CC lib/util/dif.o 00:03:15.439 CC lib/vfio_user/host/vfio_user.o 00:03:15.439 SO libspdk_dma.so.3.0 00:03:15.439 CC lib/util/fd.o 00:03:15.439 CC lib/util/file.o 00:03:15.439 SYMLINK libspdk_dma.so 00:03:15.439 CC lib/util/hexlify.o 00:03:15.439 CC lib/util/iov.o 00:03:15.439 CC lib/util/math.o 00:03:15.439 CC lib/util/pipe.o 00:03:15.439 LIB libspdk_ioat.a 00:03:15.439 SO libspdk_ioat.so.6.0 00:03:15.696 CC lib/util/strerror_tls.o 00:03:15.696 CC lib/util/string.o 00:03:15.696 LIB libspdk_vfio_user.a 00:03:15.696 SYMLINK libspdk_ioat.so 00:03:15.696 CC lib/util/uuid.o 00:03:15.696 SO libspdk_vfio_user.so.4.0 00:03:15.696 CC lib/util/fd_group.o 00:03:15.696 CC lib/util/xor.o 00:03:15.696 SYMLINK libspdk_vfio_user.so 00:03:15.696 CC lib/util/zipf.o 00:03:16.261 LIB libspdk_util.a 00:03:16.261 SO libspdk_util.so.8.0 00:03:16.261 LIB libspdk_trace_parser.a 00:03:16.518 SO libspdk_trace_parser.so.4.0 00:03:16.518 SYMLINK libspdk_util.so 00:03:16.518 CC lib/idxd/idxd.o 00:03:16.518 CC lib/idxd/idxd_user.o 00:03:16.518 SYMLINK libspdk_trace_parser.so 00:03:16.518 CC lib/env_dpdk/env.o 00:03:16.518 CC lib/env_dpdk/memory.o 00:03:16.518 CC lib/conf/conf.o 00:03:16.518 CC lib/env_dpdk/pci.o 00:03:16.518 CC lib/vmd/vmd.o 00:03:16.518 CC lib/env_dpdk/init.o 00:03:16.518 CC lib/json/json_parse.o 00:03:16.518 CC lib/rdma/common.o 00:03:16.776 LIB libspdk_conf.a 00:03:16.776 SO libspdk_conf.so.5.0 00:03:16.776 CC lib/json/json_util.o 00:03:16.776 CC lib/vmd/led.o 00:03:17.033 SYMLINK libspdk_conf.so 00:03:17.033 CC lib/rdma/rdma_verbs.o 00:03:17.033 CC lib/json/json_write.o 00:03:17.033 CC lib/env_dpdk/threads.o 00:03:17.033 CC lib/env_dpdk/pci_ioat.o 00:03:17.033 CC lib/env_dpdk/pci_virtio.o 00:03:17.033 CC lib/env_dpdk/pci_vmd.o 00:03:17.033 CC lib/env_dpdk/pci_idxd.o 00:03:17.033 LIB libspdk_rdma.a 00:03:17.291 SO libspdk_rdma.so.5.0 00:03:17.291 CC lib/env_dpdk/pci_event.o 00:03:17.291 CC lib/env_dpdk/sigbus_handler.o 00:03:17.291 SYMLINK libspdk_rdma.so 00:03:17.291 CC lib/env_dpdk/pci_dpdk.o 00:03:17.291 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.291 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.291 LIB libspdk_json.a 00:03:17.291 LIB libspdk_idxd.a 00:03:17.291 SO libspdk_json.so.5.1 00:03:17.291 SO libspdk_idxd.so.11.0 00:03:17.291 SYMLINK libspdk_json.so 00:03:17.291 SYMLINK libspdk_idxd.so 00:03:17.549 LIB libspdk_vmd.a 00:03:17.549 SO libspdk_vmd.so.5.0 00:03:17.549 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.549 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.549 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.549 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.549 SYMLINK libspdk_vmd.so 00:03:17.808 LIB libspdk_jsonrpc.a 00:03:17.808 SO libspdk_jsonrpc.so.5.1 00:03:18.066 SYMLINK libspdk_jsonrpc.so 00:03:18.066 CC lib/rpc/rpc.o 00:03:18.324 LIB libspdk_rpc.a 00:03:18.324 SO libspdk_rpc.so.5.0 00:03:18.583 SYMLINK libspdk_rpc.so 00:03:18.583 LIB libspdk_env_dpdk.a 00:03:18.583 SO libspdk_env_dpdk.so.13.0 00:03:18.583 CC lib/trace/trace.o 00:03:18.583 CC lib/trace/trace_flags.o 00:03:18.583 CC lib/trace/trace_rpc.o 
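From this point the log is SPDK's own Makefile output: CC/CXX compile steps, LIB for each static archive, SO for the versioned shared object, and SYMLINK for the unversioned .so links (libspdk_log.so.6.1 and friends), with SO version checks interleaved. A hedged reconstruction of the commands that produce this phase; the configure flags are inferred from the surrounding build (ASAN matches DPDK's b_sanitize=address above, xnvme was built earlier), and the authoritative set comes from the job's autorun config:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --enable-ubsan --with-xnvme
    make -j10    # emits the CC/LIB/SO/SYMLINK lines seen here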
00:03:18.583 CC lib/sock/sock.o 00:03:18.583 CC lib/sock/sock_rpc.o 00:03:18.583 CC lib/notify/notify.o 00:03:18.583 CC lib/notify/notify_rpc.o 00:03:18.841 SYMLINK libspdk_env_dpdk.so 00:03:18.841 LIB libspdk_notify.a 00:03:18.841 SO libspdk_notify.so.5.0 00:03:18.841 LIB libspdk_trace.a 00:03:18.841 SYMLINK libspdk_notify.so 00:03:18.841 SO libspdk_trace.so.9.0 00:03:19.100 SYMLINK libspdk_trace.so 00:03:19.100 LIB libspdk_sock.a 00:03:19.100 SO libspdk_sock.so.8.0 00:03:19.100 SYMLINK libspdk_sock.so 00:03:19.100 CC lib/thread/thread.o 00:03:19.100 CC lib/thread/iobuf.o 00:03:19.359 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.359 CC lib/nvme/nvme_ctrlr.o 00:03:19.359 CC lib/nvme/nvme_fabric.o 00:03:19.359 CC lib/nvme/nvme_ns_cmd.o 00:03:19.359 CC lib/nvme/nvme_pcie_common.o 00:03:19.359 CC lib/nvme/nvme_pcie.o 00:03:19.359 CC lib/nvme/nvme_ns.o 00:03:19.359 CC lib/nvme/nvme_qpair.o 00:03:19.617 CC lib/nvme/nvme.o 00:03:20.184 CC lib/nvme/nvme_quirks.o 00:03:20.184 CC lib/nvme/nvme_transport.o 00:03:20.184 CC lib/nvme/nvme_discovery.o 00:03:20.442 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:20.442 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:20.442 CC lib/nvme/nvme_tcp.o 00:03:20.442 CC lib/nvme/nvme_opal.o 00:03:20.700 CC lib/nvme/nvme_io_msg.o 00:03:20.700 CC lib/nvme/nvme_poll_group.o 00:03:20.959 CC lib/nvme/nvme_zns.o 00:03:20.959 CC lib/nvme/nvme_cuse.o 00:03:20.959 CC lib/nvme/nvme_vfio_user.o 00:03:20.959 CC lib/nvme/nvme_rdma.o 00:03:21.217 LIB libspdk_thread.a 00:03:21.217 SO libspdk_thread.so.9.0 00:03:21.217 SYMLINK libspdk_thread.so 00:03:21.475 CC lib/blob/blobstore.o 00:03:21.475 CC lib/init/json_config.o 00:03:21.475 CC lib/accel/accel.o 00:03:21.475 CC lib/virtio/virtio.o 00:03:21.475 CC lib/init/subsystem.o 00:03:21.734 CC lib/init/subsystem_rpc.o 00:03:21.734 CC lib/init/rpc.o 00:03:21.734 CC lib/blob/request.o 00:03:21.734 CC lib/blob/zeroes.o 00:03:21.734 LIB libspdk_init.a 00:03:21.734 CC lib/virtio/virtio_vhost_user.o 00:03:21.992 CC lib/accel/accel_rpc.o 00:03:21.992 SO libspdk_init.so.4.0 00:03:21.992 SYMLINK libspdk_init.so 00:03:21.992 CC lib/accel/accel_sw.o 00:03:21.992 CC lib/blob/blob_bs_dev.o 00:03:22.250 CC lib/virtio/virtio_vfio_user.o 00:03:22.250 CC lib/virtio/virtio_pci.o 00:03:22.250 CC lib/event/app.o 00:03:22.250 CC lib/event/reactor.o 00:03:22.250 CC lib/event/log_rpc.o 00:03:22.250 CC lib/event/app_rpc.o 00:03:22.250 CC lib/event/scheduler_static.o 00:03:22.509 LIB libspdk_virtio.a 00:03:22.509 SO libspdk_virtio.so.6.0 00:03:22.767 SYMLINK libspdk_virtio.so 00:03:22.767 LIB libspdk_nvme.a 00:03:22.767 LIB libspdk_accel.a 00:03:22.767 LIB libspdk_event.a 00:03:22.767 SO libspdk_accel.so.14.0 00:03:22.767 SO libspdk_event.so.12.0 00:03:23.025 SO libspdk_nvme.so.12.0 00:03:23.025 SYMLINK libspdk_accel.so 00:03:23.025 SYMLINK libspdk_event.so 00:03:23.025 CC lib/bdev/bdev.o 00:03:23.025 CC lib/bdev/bdev_rpc.o 00:03:23.025 CC lib/bdev/bdev_zone.o 00:03:23.025 CC lib/bdev/part.o 00:03:23.025 CC lib/bdev/scsi_nvme.o 00:03:23.283 SYMLINK libspdk_nvme.so 00:03:25.815 LIB libspdk_blob.a 00:03:25.815 SO libspdk_blob.so.10.1 00:03:25.815 SYMLINK libspdk_blob.so 00:03:25.815 CC lib/lvol/lvol.o 00:03:25.815 CC lib/blobfs/blobfs.o 00:03:25.815 CC lib/blobfs/tree.o 00:03:26.751 LIB libspdk_bdev.a 00:03:26.751 SO libspdk_bdev.so.14.0 00:03:27.009 SYMLINK libspdk_bdev.so 00:03:27.009 LIB libspdk_blobfs.a 00:03:27.009 CC lib/scsi/dev.o 00:03:27.009 CC lib/scsi/lun.o 00:03:27.009 CC lib/scsi/scsi.o 00:03:27.009 CC lib/scsi/port.o 00:03:27.009 CC lib/nvmf/ctrlr.o 00:03:27.009 CC 
lib/ftl/ftl_core.o 00:03:27.009 CC lib/nbd/nbd.o 00:03:27.009 CC lib/ublk/ublk.o 00:03:27.009 SO libspdk_blobfs.so.9.0 00:03:27.267 LIB libspdk_lvol.a 00:03:27.267 SYMLINK libspdk_blobfs.so 00:03:27.267 CC lib/ublk/ublk_rpc.o 00:03:27.267 SO libspdk_lvol.so.9.1 00:03:27.267 CC lib/scsi/scsi_bdev.o 00:03:27.267 CC lib/scsi/scsi_pr.o 00:03:27.267 SYMLINK libspdk_lvol.so 00:03:27.267 CC lib/scsi/scsi_rpc.o 00:03:27.267 CC lib/scsi/task.o 00:03:27.525 CC lib/nbd/nbd_rpc.o 00:03:27.525 CC lib/ftl/ftl_init.o 00:03:27.525 CC lib/nvmf/ctrlr_discovery.o 00:03:27.525 CC lib/ftl/ftl_layout.o 00:03:27.525 CC lib/ftl/ftl_debug.o 00:03:27.525 CC lib/ftl/ftl_io.o 00:03:27.525 LIB libspdk_nbd.a 00:03:27.525 SO libspdk_nbd.so.6.0 00:03:27.783 CC lib/nvmf/ctrlr_bdev.o 00:03:27.783 CC lib/nvmf/subsystem.o 00:03:27.783 SYMLINK libspdk_nbd.so 00:03:27.783 CC lib/nvmf/nvmf.o 00:03:27.783 CC lib/ftl/ftl_sb.o 00:03:27.783 CC lib/ftl/ftl_l2p.o 00:03:27.783 LIB libspdk_scsi.a 00:03:27.783 CC lib/ftl/ftl_l2p_flat.o 00:03:28.041 SO libspdk_scsi.so.8.0 00:03:28.041 LIB libspdk_ublk.a 00:03:28.041 SO libspdk_ublk.so.2.0 00:03:28.041 CC lib/ftl/ftl_nv_cache.o 00:03:28.041 SYMLINK libspdk_scsi.so 00:03:28.041 CC lib/ftl/ftl_band.o 00:03:28.041 CC lib/ftl/ftl_band_ops.o 00:03:28.041 CC lib/ftl/ftl_writer.o 00:03:28.041 SYMLINK libspdk_ublk.so 00:03:28.299 CC lib/iscsi/conn.o 00:03:28.299 CC lib/vhost/vhost.o 00:03:28.299 CC lib/ftl/ftl_rq.o 00:03:28.557 CC lib/ftl/ftl_reloc.o 00:03:28.557 CC lib/ftl/ftl_l2p_cache.o 00:03:28.557 CC lib/ftl/ftl_p2l.o 00:03:28.557 CC lib/vhost/vhost_rpc.o 00:03:28.816 CC lib/nvmf/nvmf_rpc.o 00:03:28.816 CC lib/iscsi/init_grp.o 00:03:28.816 CC lib/iscsi/iscsi.o 00:03:29.076 CC lib/iscsi/md5.o 00:03:29.076 CC lib/iscsi/param.o 00:03:29.076 CC lib/iscsi/portal_grp.o 00:03:29.076 CC lib/ftl/mngt/ftl_mngt.o 00:03:29.076 CC lib/iscsi/tgt_node.o 00:03:29.334 CC lib/iscsi/iscsi_subsystem.o 00:03:29.334 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:29.334 CC lib/vhost/vhost_scsi.o 00:03:29.334 CC lib/iscsi/iscsi_rpc.o 00:03:29.334 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:29.334 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:29.593 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.593 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.593 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.593 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.593 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:29.852 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.852 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:29.852 CC lib/nvmf/transport.o 00:03:29.852 CC lib/vhost/vhost_blk.o 00:03:29.852 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:29.852 CC lib/nvmf/tcp.o 00:03:29.852 CC lib/vhost/rte_vhost_user.o 00:03:29.852 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.852 CC lib/ftl/utils/ftl_conf.o 00:03:30.112 CC lib/ftl/utils/ftl_md.o 00:03:30.112 CC lib/iscsi/task.o 00:03:30.112 CC lib/nvmf/rdma.o 00:03:30.112 CC lib/ftl/utils/ftl_mempool.o 00:03:30.371 CC lib/ftl/utils/ftl_bitmap.o 00:03:30.371 CC lib/ftl/utils/ftl_property.o 00:03:30.371 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:30.371 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:30.629 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:30.629 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:30.629 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:30.629 LIB libspdk_iscsi.a 00:03:30.629 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:30.887 SO libspdk_iscsi.so.7.0 00:03:30.887 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:30.887 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:30.887 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:30.887 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:30.887 CC 
lib/ftl/base/ftl_base_dev.o 00:03:30.887 SYMLINK libspdk_iscsi.so 00:03:30.887 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.887 CC lib/ftl/ftl_trace.o 00:03:31.146 LIB libspdk_vhost.a 00:03:31.146 SO libspdk_vhost.so.7.1 00:03:31.405 SYMLINK libspdk_vhost.so 00:03:31.405 LIB libspdk_ftl.a 00:03:31.664 SO libspdk_ftl.so.8.0 00:03:31.923 SYMLINK libspdk_ftl.so 00:03:32.858 LIB libspdk_nvmf.a 00:03:33.116 SO libspdk_nvmf.so.17.0 00:03:33.374 SYMLINK libspdk_nvmf.so 00:03:33.374 CC module/env_dpdk/env_dpdk_rpc.o 00:03:33.633 CC module/accel/dsa/accel_dsa.o 00:03:33.633 CC module/blob/bdev/blob_bdev.o 00:03:33.633 CC module/accel/ioat/accel_ioat.o 00:03:33.633 CC module/accel/error/accel_error.o 00:03:33.633 CC module/accel/iaa/accel_iaa.o 00:03:33.633 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:33.633 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:33.633 CC module/sock/posix/posix.o 00:03:33.633 CC module/scheduler/gscheduler/gscheduler.o 00:03:33.633 LIB libspdk_env_dpdk_rpc.a 00:03:33.633 SO libspdk_env_dpdk_rpc.so.5.0 00:03:33.633 LIB libspdk_scheduler_dpdk_governor.a 00:03:33.633 LIB libspdk_scheduler_gscheduler.a 00:03:33.633 SYMLINK libspdk_env_dpdk_rpc.so 00:03:33.633 CC module/accel/dsa/accel_dsa_rpc.o 00:03:33.633 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:33.633 SO libspdk_scheduler_gscheduler.so.3.0 00:03:33.633 CC module/accel/error/accel_error_rpc.o 00:03:33.892 CC module/accel/ioat/accel_ioat_rpc.o 00:03:33.892 LIB libspdk_scheduler_dynamic.a 00:03:33.892 CC module/accel/iaa/accel_iaa_rpc.o 00:03:33.892 SO libspdk_scheduler_dynamic.so.3.0 00:03:33.892 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:33.892 SYMLINK libspdk_scheduler_gscheduler.so 00:03:33.892 SYMLINK libspdk_scheduler_dynamic.so 00:03:33.892 LIB libspdk_blob_bdev.a 00:03:33.892 LIB libspdk_accel_dsa.a 00:03:33.892 SO libspdk_blob_bdev.so.10.1 00:03:33.892 LIB libspdk_accel_ioat.a 00:03:33.892 SO libspdk_accel_dsa.so.4.0 00:03:33.892 LIB libspdk_accel_error.a 00:03:33.892 LIB libspdk_accel_iaa.a 00:03:33.892 SO libspdk_accel_ioat.so.5.0 00:03:33.892 SYMLINK libspdk_blob_bdev.so 00:03:33.892 SO libspdk_accel_error.so.1.0 00:03:33.892 SYMLINK libspdk_accel_dsa.so 00:03:33.892 SO libspdk_accel_iaa.so.2.0 00:03:34.150 SYMLINK libspdk_accel_ioat.so 00:03:34.150 SYMLINK libspdk_accel_error.so 00:03:34.150 SYMLINK libspdk_accel_iaa.so 00:03:34.150 CC module/blobfs/bdev/blobfs_bdev.o 00:03:34.150 CC module/bdev/malloc/bdev_malloc.o 00:03:34.150 CC module/bdev/delay/vbdev_delay.o 00:03:34.150 CC module/bdev/lvol/vbdev_lvol.o 00:03:34.150 CC module/bdev/gpt/gpt.o 00:03:34.150 CC module/bdev/nvme/bdev_nvme.o 00:03:34.150 CC module/bdev/error/vbdev_error.o 00:03:34.150 CC module/bdev/null/bdev_null.o 00:03:34.150 CC module/bdev/passthru/vbdev_passthru.o 00:03:34.409 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.409 CC module/bdev/gpt/vbdev_gpt.o 00:03:34.409 CC module/bdev/null/bdev_null_rpc.o 00:03:34.409 CC module/bdev/error/vbdev_error_rpc.o 00:03:34.409 LIB libspdk_sock_posix.a 00:03:34.409 SO libspdk_sock_posix.so.5.0 00:03:34.409 LIB libspdk_blobfs_bdev.a 00:03:34.409 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.668 SO libspdk_blobfs_bdev.so.5.0 00:03:34.668 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:34.668 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:34.668 SYMLINK libspdk_sock_posix.so 00:03:34.668 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:34.668 SYMLINK libspdk_blobfs_bdev.so 00:03:34.668 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:34.668 LIB libspdk_bdev_error.a 00:03:34.668 LIB 
libspdk_bdev_null.a 00:03:34.668 LIB libspdk_bdev_gpt.a 00:03:34.668 SO libspdk_bdev_error.so.5.0 00:03:34.668 SO libspdk_bdev_null.so.5.0 00:03:34.668 LIB libspdk_bdev_passthru.a 00:03:34.668 SO libspdk_bdev_gpt.so.5.0 00:03:34.668 SO libspdk_bdev_passthru.so.5.0 00:03:34.668 SYMLINK libspdk_bdev_error.so 00:03:34.668 LIB libspdk_bdev_malloc.a 00:03:34.668 SYMLINK libspdk_bdev_null.so 00:03:34.668 SYMLINK libspdk_bdev_gpt.so 00:03:34.668 LIB libspdk_bdev_delay.a 00:03:34.927 SYMLINK libspdk_bdev_passthru.so 00:03:34.927 SO libspdk_bdev_malloc.so.5.0 00:03:34.927 SO libspdk_bdev_delay.so.5.0 00:03:34.927 CC module/bdev/raid/bdev_raid.o 00:03:34.927 CC module/bdev/split/vbdev_split.o 00:03:34.927 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.927 CC module/bdev/xnvme/bdev_xnvme.o 00:03:34.927 SYMLINK libspdk_bdev_malloc.so 00:03:34.927 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.927 CC module/bdev/aio/bdev_aio.o 00:03:34.927 SYMLINK libspdk_bdev_delay.so 00:03:34.927 CC module/bdev/raid/bdev_raid_rpc.o 00:03:34.927 LIB libspdk_bdev_lvol.a 00:03:34.927 SO libspdk_bdev_lvol.so.5.0 00:03:35.186 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:35.186 SYMLINK libspdk_bdev_lvol.so 00:03:35.186 LIB libspdk_bdev_split.a 00:03:35.186 SO libspdk_bdev_split.so.5.0 00:03:35.186 CC module/bdev/ftl/bdev_ftl.o 00:03:35.186 SYMLINK libspdk_bdev_split.so 00:03:35.186 CC module/bdev/raid/bdev_raid_sb.o 00:03:35.186 LIB libspdk_bdev_xnvme.a 00:03:35.186 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.186 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:35.186 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.186 SO libspdk_bdev_xnvme.so.2.0 00:03:35.186 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.444 SYMLINK libspdk_bdev_xnvme.so 00:03:35.444 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.444 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.444 LIB libspdk_bdev_zone_block.a 00:03:35.444 LIB libspdk_bdev_aio.a 00:03:35.444 SO libspdk_bdev_zone_block.so.5.0 00:03:35.444 SO libspdk_bdev_aio.so.5.0 00:03:35.444 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.444 CC module/bdev/raid/raid0.o 00:03:35.444 SYMLINK libspdk_bdev_zone_block.so 00:03:35.444 CC module/bdev/raid/raid1.o 00:03:35.701 SYMLINK libspdk_bdev_aio.so 00:03:35.701 CC module/bdev/nvme/nvme_rpc.o 00:03:35.701 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.701 CC module/bdev/nvme/vbdev_opal.o 00:03:35.701 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.701 LIB libspdk_bdev_ftl.a 00:03:35.701 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.701 SO libspdk_bdev_ftl.so.5.0 00:03:35.959 CC module/bdev/raid/concat.o 00:03:35.959 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.959 SYMLINK libspdk_bdev_ftl.so 00:03:35.959 LIB libspdk_bdev_iscsi.a 00:03:35.959 SO libspdk_bdev_iscsi.so.5.0 00:03:35.959 LIB libspdk_bdev_virtio.a 00:03:35.959 SO libspdk_bdev_virtio.so.5.0 00:03:35.959 SYMLINK libspdk_bdev_iscsi.so 00:03:35.959 SYMLINK libspdk_bdev_virtio.so 00:03:36.217 LIB libspdk_bdev_raid.a 00:03:36.217 SO libspdk_bdev_raid.so.5.0 00:03:36.217 SYMLINK libspdk_bdev_raid.so 00:03:36.784 LIB libspdk_bdev_nvme.a 00:03:37.042 SO libspdk_bdev_nvme.so.6.0 00:03:37.042 SYMLINK libspdk_bdev_nvme.so 00:03:37.300 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.300 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.300 CC module/event/subsystems/vmd/vmd.o 00:03:37.300 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.300 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:37.300 CC module/event/subsystems/sock/sock.o 00:03:37.300 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:37.562 LIB libspdk_event_sock.a 00:03:37.562 LIB libspdk_event_vhost_blk.a 00:03:37.562 LIB libspdk_event_vmd.a 00:03:37.562 LIB libspdk_event_scheduler.a 00:03:37.562 SO libspdk_event_sock.so.4.0 00:03:37.562 LIB libspdk_event_iobuf.a 00:03:37.562 SO libspdk_event_vhost_blk.so.2.0 00:03:37.562 SO libspdk_event_scheduler.so.3.0 00:03:37.562 SO libspdk_event_vmd.so.5.0 00:03:37.562 SO libspdk_event_iobuf.so.2.0 00:03:37.562 SYMLINK libspdk_event_sock.so 00:03:37.562 SYMLINK libspdk_event_vhost_blk.so 00:03:37.562 SYMLINK libspdk_event_scheduler.so 00:03:37.562 SYMLINK libspdk_event_vmd.so 00:03:37.825 SYMLINK libspdk_event_iobuf.so 00:03:37.825 CC module/event/subsystems/accel/accel.o 00:03:38.083 LIB libspdk_event_accel.a 00:03:38.083 SO libspdk_event_accel.so.5.0 00:03:38.083 SYMLINK libspdk_event_accel.so 00:03:38.342 CC module/event/subsystems/bdev/bdev.o 00:03:38.599 LIB libspdk_event_bdev.a 00:03:38.599 SO libspdk_event_bdev.so.5.0 00:03:38.599 SYMLINK libspdk_event_bdev.so 00:03:38.857 CC module/event/subsystems/scsi/scsi.o 00:03:38.857 CC module/event/subsystems/nbd/nbd.o 00:03:38.857 CC module/event/subsystems/ublk/ublk.o 00:03:38.857 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:38.857 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:38.857 LIB libspdk_event_ublk.a 00:03:38.857 LIB libspdk_event_nbd.a 00:03:38.857 LIB libspdk_event_scsi.a 00:03:39.115 SO libspdk_event_ublk.so.2.0 00:03:39.115 SO libspdk_event_nbd.so.5.0 00:03:39.115 SO libspdk_event_scsi.so.5.0 00:03:39.115 SYMLINK libspdk_event_nbd.so 00:03:39.115 SYMLINK libspdk_event_ublk.so 00:03:39.115 SYMLINK libspdk_event_scsi.so 00:03:39.115 LIB libspdk_event_nvmf.a 00:03:39.115 SO libspdk_event_nvmf.so.5.0 00:03:39.115 SYMLINK libspdk_event_nvmf.so 00:03:39.115 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.115 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:39.373 LIB libspdk_event_vhost_scsi.a 00:03:39.373 LIB libspdk_event_iscsi.a 00:03:39.373 SO libspdk_event_vhost_scsi.so.2.0 00:03:39.373 SO libspdk_event_iscsi.so.5.0 00:03:39.632 SYMLINK libspdk_event_vhost_scsi.so 00:03:39.632 SYMLINK libspdk_event_iscsi.so 00:03:39.632 SO libspdk.so.5.0 00:03:39.632 SYMLINK libspdk.so 00:03:39.891 CXX app/trace/trace.o 00:03:39.891 TEST_HEADER include/spdk/accel.h 00:03:39.891 TEST_HEADER include/spdk/accel_module.h 00:03:39.891 TEST_HEADER include/spdk/assert.h 00:03:39.891 TEST_HEADER include/spdk/barrier.h 00:03:39.891 TEST_HEADER include/spdk/base64.h 00:03:39.891 TEST_HEADER include/spdk/bdev.h 00:03:39.891 TEST_HEADER include/spdk/bdev_module.h 00:03:39.891 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.891 TEST_HEADER include/spdk/bit_array.h 00:03:39.891 TEST_HEADER include/spdk/bit_pool.h 00:03:39.891 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.891 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.891 TEST_HEADER include/spdk/blobfs.h 00:03:39.891 TEST_HEADER include/spdk/blob.h 00:03:39.891 TEST_HEADER include/spdk/conf.h 00:03:39.891 TEST_HEADER include/spdk/config.h 00:03:39.891 TEST_HEADER include/spdk/cpuset.h 00:03:39.891 TEST_HEADER include/spdk/crc16.h 00:03:39.891 TEST_HEADER include/spdk/crc32.h 00:03:39.891 TEST_HEADER include/spdk/crc64.h 00:03:39.891 TEST_HEADER include/spdk/dif.h 00:03:39.891 TEST_HEADER include/spdk/dma.h 00:03:39.891 CC test/event/event_perf/event_perf.o 00:03:39.891 TEST_HEADER include/spdk/endian.h 00:03:39.891 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.891 TEST_HEADER include/spdk/env.h 00:03:39.891 TEST_HEADER 
include/spdk/event.h 00:03:39.891 TEST_HEADER include/spdk/fd_group.h 00:03:39.891 TEST_HEADER include/spdk/fd.h 00:03:39.891 TEST_HEADER include/spdk/file.h 00:03:39.891 TEST_HEADER include/spdk/ftl.h 00:03:39.891 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.891 CC examples/accel/perf/accel_perf.o 00:03:39.891 TEST_HEADER include/spdk/hexlify.h 00:03:39.891 TEST_HEADER include/spdk/histogram_data.h 00:03:39.891 TEST_HEADER include/spdk/idxd.h 00:03:39.891 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.891 TEST_HEADER include/spdk/init.h 00:03:39.891 TEST_HEADER include/spdk/ioat.h 00:03:39.891 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.891 CC test/accel/dif/dif.o 00:03:39.891 CC test/dma/test_dma/test_dma.o 00:03:39.891 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.891 TEST_HEADER include/spdk/json.h 00:03:39.891 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.891 TEST_HEADER include/spdk/likely.h 00:03:39.891 CC test/bdev/bdevio/bdevio.o 00:03:39.892 TEST_HEADER include/spdk/log.h 00:03:39.892 CC test/blobfs/mkfs/mkfs.o 00:03:39.892 TEST_HEADER include/spdk/lvol.h 00:03:39.892 TEST_HEADER include/spdk/memory.h 00:03:39.892 TEST_HEADER include/spdk/mmio.h 00:03:39.892 TEST_HEADER include/spdk/nbd.h 00:03:39.892 TEST_HEADER include/spdk/notify.h 00:03:39.892 CC test/app/bdev_svc/bdev_svc.o 00:03:39.892 TEST_HEADER include/spdk/nvme.h 00:03:39.892 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.892 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.892 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.892 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.892 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.892 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.892 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.892 TEST_HEADER include/spdk/nvmf.h 00:03:39.892 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.892 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.892 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.892 TEST_HEADER include/spdk/opal.h 00:03:39.892 TEST_HEADER include/spdk/opal_spec.h 00:03:39.892 TEST_HEADER include/spdk/pci_ids.h 00:03:39.892 TEST_HEADER include/spdk/pipe.h 00:03:39.892 TEST_HEADER include/spdk/queue.h 00:03:39.892 TEST_HEADER include/spdk/reduce.h 00:03:39.892 TEST_HEADER include/spdk/rpc.h 00:03:39.892 TEST_HEADER include/spdk/scheduler.h 00:03:39.892 TEST_HEADER include/spdk/scsi.h 00:03:39.892 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.892 TEST_HEADER include/spdk/sock.h 00:03:39.892 TEST_HEADER include/spdk/stdinc.h 00:03:39.892 TEST_HEADER include/spdk/string.h 00:03:39.892 TEST_HEADER include/spdk/thread.h 00:03:39.892 TEST_HEADER include/spdk/trace.h 00:03:39.892 TEST_HEADER include/spdk/trace_parser.h 00:03:39.892 TEST_HEADER include/spdk/tree.h 00:03:39.892 TEST_HEADER include/spdk/ublk.h 00:03:39.892 TEST_HEADER include/spdk/util.h 00:03:39.892 TEST_HEADER include/spdk/uuid.h 00:03:39.892 TEST_HEADER include/spdk/version.h 00:03:39.892 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.892 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.892 TEST_HEADER include/spdk/vhost.h 00:03:39.892 TEST_HEADER include/spdk/vmd.h 00:03:39.892 TEST_HEADER include/spdk/xor.h 00:03:39.892 TEST_HEADER include/spdk/zipf.h 00:03:39.892 CXX test/cpp_headers/accel.o 00:03:40.150 LINK event_perf 00:03:40.150 LINK bdev_svc 00:03:40.150 LINK mkfs 00:03:40.150 CXX test/cpp_headers/accel_module.o 00:03:40.150 CC test/event/reactor/reactor.o 00:03:40.150 LINK spdk_trace 00:03:40.409 LINK test_dma 00:03:40.409 CXX test/cpp_headers/assert.o 00:03:40.409 LINK bdevio 00:03:40.409 LINK reactor 
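[Editor's note] The TEST_HEADER and CXX test/cpp_headers lines above come from the cpp_headers test, which compiles one translation unit per public SPDK header to prove each header is self-contained (includes everything it uses). A minimal sketch of that idea in shell; the paths, compiler, and loop here are illustrative assumptions, not the exact autotest harness:

    #!/usr/bin/env bash
    # Sketch: verify each public header compiles on its own.
    # Assumes an SPDK checkout at $SPDK and a C++ compiler in $CXX.
    SPDK=${SPDK:-/home/vagrant/spdk_repo/spdk}
    CXX=${CXX:-g++}
    tmp=$(mktemp -d)
    for hdr in "$SPDK"/include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        # One TU that includes only this header; if the header forgets
        # an include it depends on, this compile fails.
        printf '#include <spdk/%s.h>\n' "$name" > "$tmp/$name.cpp"
        "$CXX" -I"$SPDK/include" -c "$tmp/$name.cpp" -o "$tmp/$name.o" \
            || echo "header not self-contained: $name.h"
    done
    rm -rf "$tmp"
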
00:03:40.409 LINK dif 00:03:40.409 LINK accel_perf 00:03:40.409 CC app/trace_record/trace_record.o 00:03:40.409 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.409 CC test/app/histogram_perf/histogram_perf.o 00:03:40.668 CXX test/cpp_headers/barrier.o 00:03:40.668 CXX test/cpp_headers/base64.o 00:03:40.668 LINK mem_callbacks 00:03:40.668 CC test/event/reactor_perf/reactor_perf.o 00:03:40.668 CXX test/cpp_headers/bdev.o 00:03:40.668 LINK histogram_perf 00:03:40.668 CC app/nvmf_tgt/nvmf_main.o 00:03:40.668 LINK spdk_trace_record 00:03:40.668 CC examples/bdev/hello_world/hello_bdev.o 00:03:40.668 LINK reactor_perf 00:03:40.668 CC test/env/vtophys/vtophys.o 00:03:40.926 CXX test/cpp_headers/bdev_module.o 00:03:40.926 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.926 CC examples/blob/hello_world/hello_blob.o 00:03:40.926 LINK nvmf_tgt 00:03:40.926 CC examples/ioat/perf/perf.o 00:03:40.926 LINK vtophys 00:03:40.926 LINK env_dpdk_post_init 00:03:40.926 CC test/event/app_repeat/app_repeat.o 00:03:40.926 LINK nvme_fuzz 00:03:40.926 LINK hello_bdev 00:03:40.926 CC examples/nvme/hello_world/hello_world.o 00:03:40.926 CXX test/cpp_headers/bdev_zone.o 00:03:41.185 LINK app_repeat 00:03:41.185 LINK hello_blob 00:03:41.185 LINK ioat_perf 00:03:41.185 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:41.185 CC test/env/memory/memory_ut.o 00:03:41.185 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.185 LINK hello_world 00:03:41.185 CC app/iscsi_tgt/iscsi_tgt.o 00:03:41.444 CXX test/cpp_headers/bit_array.o 00:03:41.444 CC examples/ioat/verify/verify.o 00:03:41.444 CC test/lvol/esnap/esnap.o 00:03:41.444 CC test/event/scheduler/scheduler.o 00:03:41.444 CC examples/blob/cli/blobcli.o 00:03:41.444 CXX test/cpp_headers/bit_pool.o 00:03:41.444 LINK iscsi_tgt 00:03:41.444 CC examples/nvme/reconnect/reconnect.o 00:03:41.702 CXX test/cpp_headers/blob_bdev.o 00:03:41.702 LINK verify 00:03:41.702 LINK scheduler 00:03:41.961 CC app/spdk_tgt/spdk_tgt.o 00:03:41.961 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.961 CC app/spdk_lspci/spdk_lspci.o 00:03:41.961 LINK reconnect 00:03:41.961 CC test/nvme/aer/aer.o 00:03:41.961 LINK blobcli 00:03:41.961 LINK spdk_tgt 00:03:41.961 CXX test/cpp_headers/blobfs.o 00:03:41.961 LINK spdk_lspci 00:03:42.219 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:42.219 LINK bdevperf 00:03:42.219 CXX test/cpp_headers/blob.o 00:03:42.219 LINK memory_ut 00:03:42.219 CC test/nvme/reset/reset.o 00:03:42.219 CC test/nvme/sgl/sgl.o 00:03:42.219 CC app/spdk_nvme_perf/perf.o 00:03:42.477 LINK aer 00:03:42.477 CXX test/cpp_headers/conf.o 00:03:42.477 CC test/nvme/e2edp/nvme_dp.o 00:03:42.477 CC test/env/pci/pci_ut.o 00:03:42.477 CXX test/cpp_headers/config.o 00:03:42.477 CC test/rpc_client/rpc_client_test.o 00:03:42.477 LINK reset 00:03:42.735 LINK sgl 00:03:42.735 CXX test/cpp_headers/cpuset.o 00:03:42.735 LINK nvme_manage 00:03:42.735 CXX test/cpp_headers/crc16.o 00:03:42.735 LINK rpc_client_test 00:03:42.735 LINK nvme_dp 00:03:42.735 CC test/nvme/overhead/overhead.o 00:03:42.994 CC examples/sock/hello_world/hello_sock.o 00:03:42.994 CXX test/cpp_headers/crc32.o 00:03:42.994 CC examples/nvme/arbitration/arbitration.o 00:03:42.994 LINK pci_ut 00:03:42.994 CC examples/vmd/lsvmd/lsvmd.o 00:03:43.252 CXX test/cpp_headers/crc64.o 00:03:43.252 LINK overhead 00:03:43.252 CC examples/nvmf/nvmf/nvmf.o 00:03:43.252 LINK hello_sock 00:03:43.252 LINK lsvmd 00:03:43.252 CXX test/cpp_headers/dif.o 00:03:43.252 CXX test/cpp_headers/dma.o 00:03:43.252 LINK spdk_nvme_perf 00:03:43.252 CXX 
test/cpp_headers/endian.o 00:03:43.252 CC test/nvme/err_injection/err_injection.o 00:03:43.513 LINK arbitration 00:03:43.513 LINK iscsi_fuzz 00:03:43.513 CC examples/vmd/led/led.o 00:03:43.513 CXX test/cpp_headers/env_dpdk.o 00:03:43.513 LINK nvmf 00:03:43.513 CC app/spdk_nvme_identify/identify.o 00:03:43.513 CC app/spdk_nvme_discover/discovery_aer.o 00:03:43.513 LINK err_injection 00:03:43.513 LINK led 00:03:43.513 CC examples/nvme/hotplug/hotplug.o 00:03:43.771 CC examples/util/zipf/zipf.o 00:03:43.771 CXX test/cpp_headers/env.o 00:03:43.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.771 CC app/spdk_top/spdk_top.o 00:03:43.771 LINK spdk_nvme_discover 00:03:43.771 CC test/nvme/startup/startup.o 00:03:43.771 LINK zipf 00:03:43.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.771 CXX test/cpp_headers/event.o 00:03:44.029 LINK hotplug 00:03:44.029 CXX test/cpp_headers/fd_group.o 00:03:44.029 CC examples/thread/thread/thread_ex.o 00:03:44.029 LINK startup 00:03:44.029 CC examples/idxd/perf/perf.o 00:03:44.288 CXX test/cpp_headers/fd.o 00:03:44.288 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:44.288 CC test/thread/poller_perf/poller_perf.o 00:03:44.288 CC test/nvme/reserve/reserve.o 00:03:44.288 LINK thread 00:03:44.288 CXX test/cpp_headers/file.o 00:03:44.288 LINK vhost_fuzz 00:03:44.288 LINK poller_perf 00:03:44.288 LINK cmb_copy 00:03:44.547 LINK reserve 00:03:44.547 CXX test/cpp_headers/ftl.o 00:03:44.547 LINK idxd_perf 00:03:44.547 CC test/nvme/simple_copy/simple_copy.o 00:03:44.547 LINK spdk_nvme_identify 00:03:44.547 CC test/app/jsoncat/jsoncat.o 00:03:44.547 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.547 CC examples/nvme/abort/abort.o 00:03:44.805 CC test/nvme/connect_stress/connect_stress.o 00:03:44.805 CXX test/cpp_headers/gpt_spec.o 00:03:44.805 LINK jsoncat 00:03:44.805 CC test/nvme/boot_partition/boot_partition.o 00:03:44.805 LINK interrupt_tgt 00:03:44.805 CC test/nvme/compliance/nvme_compliance.o 00:03:44.805 LINK simple_copy 00:03:44.805 LINK spdk_top 00:03:44.805 CXX test/cpp_headers/hexlify.o 00:03:44.805 LINK connect_stress 00:03:44.805 CC test/app/stub/stub.o 00:03:45.063 LINK boot_partition 00:03:45.063 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.063 CXX test/cpp_headers/histogram_data.o 00:03:45.063 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:45.063 LINK abort 00:03:45.063 LINK stub 00:03:45.063 CC test/nvme/fdp/fdp.o 00:03:45.063 CC app/vhost/vhost.o 00:03:45.333 LINK nvme_compliance 00:03:45.333 CC app/spdk_dd/spdk_dd.o 00:03:45.333 CXX test/cpp_headers/idxd.o 00:03:45.333 LINK fused_ordering 00:03:45.333 LINK doorbell_aers 00:03:45.333 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:45.333 LINK vhost 00:03:45.333 CXX test/cpp_headers/idxd_spec.o 00:03:45.333 CC test/nvme/cuse/cuse.o 00:03:45.333 CC app/fio/nvme/fio_plugin.o 00:03:45.333 CXX test/cpp_headers/init.o 00:03:45.333 CXX test/cpp_headers/ioat.o 00:03:45.594 LINK pmr_persistence 00:03:45.594 LINK fdp 00:03:45.594 CXX test/cpp_headers/ioat_spec.o 00:03:45.594 CXX test/cpp_headers/iscsi_spec.o 00:03:45.594 LINK spdk_dd 00:03:45.594 CXX test/cpp_headers/json.o 00:03:45.594 CXX test/cpp_headers/jsonrpc.o 00:03:45.594 CXX test/cpp_headers/likely.o 00:03:45.594 CC app/fio/bdev/fio_plugin.o 00:03:45.852 CXX test/cpp_headers/log.o 00:03:45.852 CXX test/cpp_headers/lvol.o 00:03:45.852 CXX test/cpp_headers/memory.o 00:03:45.852 CXX test/cpp_headers/mmio.o 00:03:45.852 CXX test/cpp_headers/nbd.o 00:03:45.852 CXX test/cpp_headers/notify.o 00:03:45.852 CXX 
test/cpp_headers/nvme.o 00:03:45.852 CXX test/cpp_headers/nvme_intel.o 00:03:45.852 CXX test/cpp_headers/nvme_ocssd.o 00:03:45.852 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.111 CXX test/cpp_headers/nvme_spec.o 00:03:46.111 CXX test/cpp_headers/nvme_zns.o 00:03:46.111 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.111 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.111 LINK spdk_nvme 00:03:46.111 CXX test/cpp_headers/nvmf.o 00:03:46.111 CXX test/cpp_headers/nvmf_spec.o 00:03:46.111 CXX test/cpp_headers/nvmf_transport.o 00:03:46.111 CXX test/cpp_headers/opal.o 00:03:46.111 CXX test/cpp_headers/opal_spec.o 00:03:46.370 LINK spdk_bdev 00:03:46.370 CXX test/cpp_headers/pci_ids.o 00:03:46.370 CXX test/cpp_headers/pipe.o 00:03:46.370 CXX test/cpp_headers/queue.o 00:03:46.370 CXX test/cpp_headers/reduce.o 00:03:46.370 CXX test/cpp_headers/rpc.o 00:03:46.370 CXX test/cpp_headers/scheduler.o 00:03:46.370 CXX test/cpp_headers/scsi.o 00:03:46.370 CXX test/cpp_headers/scsi_spec.o 00:03:46.370 CXX test/cpp_headers/sock.o 00:03:46.370 CXX test/cpp_headers/stdinc.o 00:03:46.370 CXX test/cpp_headers/string.o 00:03:46.628 CXX test/cpp_headers/thread.o 00:03:46.628 CXX test/cpp_headers/trace.o 00:03:46.628 CXX test/cpp_headers/trace_parser.o 00:03:46.628 CXX test/cpp_headers/tree.o 00:03:46.628 CXX test/cpp_headers/ublk.o 00:03:46.628 CXX test/cpp_headers/util.o 00:03:46.628 CXX test/cpp_headers/uuid.o 00:03:46.628 LINK cuse 00:03:46.628 CXX test/cpp_headers/version.o 00:03:46.628 CXX test/cpp_headers/vfio_user_pci.o 00:03:46.628 CXX test/cpp_headers/vfio_user_spec.o 00:03:46.628 CXX test/cpp_headers/vhost.o 00:03:46.628 CXX test/cpp_headers/vmd.o 00:03:46.628 CXX test/cpp_headers/xor.o 00:03:46.886 CXX test/cpp_headers/zipf.o 00:03:47.452 LINK esnap 00:03:47.711 00:03:47.711 real 1m10.534s 00:03:47.711 user 7m12.362s 00:03:47.711 sys 1m24.614s 00:03:47.711 ************************************ 00:03:47.711 END TEST make 00:03:47.711 ************************************ 00:03:47.711 04:43:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:47.711 04:43:54 -- common/autotest_common.sh@10 -- $ set +x 00:03:47.969 04:43:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:47.969 04:43:54 -- nvmf/common.sh@7 -- # uname -s 00:03:47.969 04:43:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:47.969 04:43:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:47.969 04:43:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:47.969 04:43:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:47.969 04:43:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:47.969 04:43:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:47.969 04:43:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:47.969 04:43:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:47.969 04:43:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:47.969 04:43:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:47.969 04:43:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2d63dd8-27f0-4005-943a-9616d5238cfe 00:03:47.969 04:43:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=d2d63dd8-27f0-4005-943a-9616d5238cfe 00:03:47.969 04:43:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:47.969 04:43:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:47.969 04:43:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:47.969 04:43:54 -- nvmf/common.sh@44 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:47.969 04:43:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:47.969 04:43:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.969 04:43:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.969 04:43:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.969 04:43:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.969 04:43:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.969 04:43:54 -- paths/export.sh@5 -- # export PATH 00:03:47.969 04:43:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.969 04:43:54 -- nvmf/common.sh@46 -- # : 0 00:03:47.969 04:43:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:47.969 04:43:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:47.969 04:43:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:47.969 04:43:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:47.969 04:43:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:47.969 04:43:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:47.969 04:43:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:47.969 04:43:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:47.969 04:43:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:47.969 04:43:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:47.969 04:43:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:47.969 04:43:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:47.969 04:43:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:47.969 04:43:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:47.969 04:43:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:47.969 04:43:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:47.969 04:43:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:47.969 04:43:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:47.969 04:43:55 -- spdk/autotest.sh@48 -- # udevadm_pid=48362 00:03:47.969 04:43:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:47.969 04:43:55 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:47.969 04:43:55 -- spdk/autotest.sh@54 -- # echo 48365 00:03:47.969 04:43:55 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power 00:03:47.969 04:43:55 -- spdk/autotest.sh@56 -- # echo 48368 00:03:47.969 04:43:55 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:47.969 04:43:55 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:47.969 04:43:55 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.969 04:43:55 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:47.969 04:43:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:47.969 04:43:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.969 04:43:55 -- spdk/autotest.sh@70 -- # create_test_list 00:03:47.969 04:43:55 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:47.969 04:43:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.969 04:43:55 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:47.969 04:43:55 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:47.969 04:43:55 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:47.969 04:43:55 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:47.969 04:43:55 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:47.969 04:43:55 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:47.969 04:43:55 -- common/autotest_common.sh@1440 -- # uname 00:03:47.969 04:43:55 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:47.969 04:43:55 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:47.969 04:43:55 -- common/autotest_common.sh@1460 -- # uname 00:03:47.969 04:43:55 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:48.228 04:43:55 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:48.228 04:43:55 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:48.228 04:43:55 -- spdk/autotest.sh@83 -- # hash lcov 00:03:48.228 04:43:55 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:48.228 04:43:55 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:48.228 --rc lcov_branch_coverage=1 00:03:48.228 --rc lcov_function_coverage=1 00:03:48.228 --rc genhtml_branch_coverage=1 00:03:48.228 --rc genhtml_function_coverage=1 00:03:48.228 --rc genhtml_legend=1 00:03:48.228 --rc geninfo_all_blocks=1 00:03:48.228 ' 00:03:48.228 04:43:55 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:48.228 --rc lcov_branch_coverage=1 00:03:48.228 --rc lcov_function_coverage=1 00:03:48.228 --rc genhtml_branch_coverage=1 00:03:48.228 --rc genhtml_function_coverage=1 00:03:48.228 --rc genhtml_legend=1 00:03:48.228 --rc geninfo_all_blocks=1 00:03:48.228 ' 00:03:48.228 04:43:55 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:48.228 --rc lcov_branch_coverage=1 00:03:48.228 --rc lcov_function_coverage=1 00:03:48.228 --rc genhtml_branch_coverage=1 00:03:48.228 --rc genhtml_function_coverage=1 00:03:48.228 --rc genhtml_legend=1 00:03:48.228 --rc geninfo_all_blocks=1 00:03:48.228 --no-external' 00:03:48.228 04:43:55 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:48.228 --rc lcov_branch_coverage=1 00:03:48.228 --rc lcov_function_coverage=1 00:03:48.228 --rc genhtml_branch_coverage=1 00:03:48.228 --rc genhtml_function_coverage=1 00:03:48.228 --rc genhtml_legend=1 00:03:48.228 --rc geninfo_all_blocks=1 00:03:48.228 --no-external' 00:03:48.228 04:43:55 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:48.228 lcov: LCOV version 1.14 00:03:48.228 04:43:55 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:56.344 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:56.344 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:56.344 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:56.344 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:56.344 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:56.344 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:11.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:11.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:11.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:11.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:11.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:11.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:11.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:11.229 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 
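[Editor's note] The lcov invocation above takes an initial (-i) coverage baseline before any test runs, so files that are never exercised still appear in the final report with zero counts. The "no functions found" warnings are typically harmless here: the cpp_headers objects only include headers and define no functions of their own. The post-test capture and merge below are the standard lcov pattern, shown as an assumption rather than copied from this log:

    # Sketch of the lcov flow behind the capture above.
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output
    $LCOV -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"   # before tests
    # ... run the test suites ...
    $LCOV -q -c    -t Tests    -d "$src" -o "$out/cov_test.info"   # after tests
    $LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
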
00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:11.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:11.230 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions 
found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:11.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:11.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:11.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:11.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:11.749 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:11.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:15.035 04:44:21 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:15.035 04:44:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.035 04:44:21 -- common/autotest_common.sh@10 -- # set +x 00:04:15.035 04:44:21 -- spdk/autotest.sh@102 -- # rm -f 00:04:15.035 04:44:21 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.974 lsblk: /dev/nvme2c2n1: not a block device 00:04:15.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.234 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:16.234 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:16.234 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:16.234 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:16.234 04:44:23 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:16.234 04:44:23 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:16.234 04:44:23 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:16.234 04:44:23 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:16.234 04:44:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.234 04:44:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.234 04:44:23 -- common/autotest_common.sh@1658 -- # 
is_block_zoned nvme1n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.234 04:44:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2c2n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1647 -- # local device=nvme2c2n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.234 04:44:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.234 04:44:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:16.234 04:44:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.234 04:44:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n2 00:04:16.234 04:44:23 -- common/autotest_common.sh@1647 -- # local device=nvme3n2 00:04:16.234 04:44:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.234 04:44:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n3 00:04:16.234 04:44:23 -- common/autotest_common.sh@1647 -- # local device=nvme3n3 00:04:16.234 04:44:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:16.234 04:44:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.234 04:44:23 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:16.234 04:44:23 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme3n2 /dev/nvme3n3 00:04:16.234 04:44:23 -- spdk/autotest.sh@121 -- # grep -v p 00:04:16.234 04:44:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.234 04:44:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.234 04:44:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:16.234 04:44:23 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:16.234 04:44:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.234 No valid GPT data, bailing 00:04:16.234 04:44:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.234 04:44:23 -- scripts/common.sh@393 -- # pt= 00:04:16.234 04:44:23 -- scripts/common.sh@394 -- # return 1 00:04:16.234 04:44:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.234 1+0 records in 00:04:16.234 
1+0 records out 00:04:16.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447105 s, 235 MB/s 00:04:16.234 04:44:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.234 04:44:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.234 04:44:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:16.234 04:44:23 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:16.234 04:44:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:16.493 No valid GPT data, bailing 00:04:16.493 04:44:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.493 04:44:23 -- scripts/common.sh@393 -- # pt= 00:04:16.493 04:44:23 -- scripts/common.sh@394 -- # return 1 00:04:16.494 04:44:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:16.494 1+0 records in 00:04:16.494 1+0 records out 00:04:16.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142558 s, 73.6 MB/s 00:04:16.494 04:44:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.494 04:44:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.494 04:44:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:04:16.494 04:44:23 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:16.494 04:44:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:16.494 No valid GPT data, bailing 00:04:16.494 04:44:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:16.494 04:44:23 -- scripts/common.sh@393 -- # pt= 00:04:16.494 04:44:23 -- scripts/common.sh@394 -- # return 1 00:04:16.494 04:44:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:16.494 1+0 records in 00:04:16.494 1+0 records out 00:04:16.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435851 s, 241 MB/s 00:04:16.494 04:44:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.494 04:44:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.494 04:44:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:04:16.494 04:44:23 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:16.494 04:44:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:16.494 No valid GPT data, bailing 00:04:16.494 04:44:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:16.494 04:44:23 -- scripts/common.sh@393 -- # pt= 00:04:16.494 04:44:23 -- scripts/common.sh@394 -- # return 1 00:04:16.494 04:44:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:16.494 1+0 records in 00:04:16.494 1+0 records out 00:04:16.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00407727 s, 257 MB/s 00:04:16.494 04:44:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.494 04:44:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.494 04:44:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n2 00:04:16.494 04:44:23 -- scripts/common.sh@380 -- # local block=/dev/nvme3n2 pt 00:04:16.494 04:44:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:16.753 No valid GPT data, bailing 00:04:16.753 04:44:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:16.753 04:44:23 -- scripts/common.sh@393 -- # pt= 00:04:16.753 04:44:23 -- scripts/common.sh@394 -- # return 1 00:04:16.753 04:44:23 -- 
spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:16.753 1+0 records in 00:04:16.753 1+0 records out 00:04:16.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431986 s, 243 MB/s 00:04:16.753 04:44:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.753 04:44:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.753 04:44:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n3 00:04:16.753 04:44:23 -- scripts/common.sh@380 -- # local block=/dev/nvme3n3 pt 00:04:16.753 04:44:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:16.753 No valid GPT data, bailing 00:04:16.753 04:44:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:16.753 04:44:23 -- scripts/common.sh@393 -- # pt= 00:04:16.753 04:44:23 -- scripts/common.sh@394 -- # return 1 00:04:16.753 04:44:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:16.753 1+0 records in 00:04:16.753 1+0 records out 00:04:16.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438623 s, 239 MB/s 00:04:16.753 04:44:23 -- spdk/autotest.sh@129 -- # sync 00:04:17.012 04:44:23 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:17.012 04:44:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:17.012 04:44:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:18.942 04:44:25 -- spdk/autotest.sh@135 -- # uname -s 00:04:18.942 04:44:25 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:18.942 04:44:25 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:18.942 04:44:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.942 04:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.942 04:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 ************************************ 00:04:18.942 START TEST setup.sh 00:04:18.942 ************************************ 00:04:18.942 04:44:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:18.942 * Looking for test storage... 00:04:18.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.942 04:44:25 -- setup/test-setup.sh@10 -- # uname -s 00:04:18.942 04:44:25 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:18.942 04:44:25 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:18.942 04:44:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.942 04:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.942 04:44:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.942 ************************************ 00:04:18.942 START TEST acl 00:04:18.942 ************************************ 00:04:18.942 04:44:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:18.942 * Looking for test storage... 
00:04:18.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.942 04:44:26 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:18.942 04:44:26 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:18.942 04:44:26 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:18.942 04:44:26 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:18.942 04:44:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.942 04:44:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.942 04:44:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.942 04:44:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2c2n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1647 -- # local device=nvme2c2n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.942 04:44:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.942 04:44:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:18.942 04:44:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.942 04:44:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n2 00:04:18.942 04:44:26 -- common/autotest_common.sh@1647 -- # local device=nvme3n2 00:04:18.942 04:44:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.942 04:44:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n3 00:04:18.942 04:44:26 -- common/autotest_common.sh@1647 -- # local device=nvme3n3 00:04:18.942 04:44:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:18.942 04:44:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.942 04:44:26 -- setup/acl.sh@12 -- # devs=() 00:04:18.942 04:44:26 -- setup/acl.sh@12 -- # declare -a devs 
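[Editor's note] The is_block_zoned trace above (run once during pre-cleanup and again by the acl test) reads each NVMe namespace's queue/zoned sysfs attribute: "none" means a conventional device, anything else (e.g. "host-managed") marks it as zoned so the tests can exclude it. A standalone sketch of that scan, with the map name chosen here for illustration:

    # Sketch of the zoned-device scan traced above: a namespace counts
    # as zoned when /sys/block/<dev>/queue/zoned reports anything but "none".
    declare -A zoned_devs
    for nvme in /sys/block/nvme*n*; do
        dev=$(basename "$nvme")
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(< "$nvme/queue/zoned") == none ]] && continue
        zoned_devs[$dev]=1   # e.g. host-managed namespace: skip in tests
    done
    echo "zoned devices: ${!zoned_devs[*]}"
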
00:04:18.942 04:44:26 -- setup/acl.sh@13 -- # drivers=() 00:04:18.942 04:44:26 -- setup/acl.sh@13 -- # declare -A drivers 00:04:18.942 04:44:26 -- setup/acl.sh@51 -- # setup reset 00:04:18.942 04:44:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.942 04:44:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.319 04:44:27 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:20.319 04:44:27 -- setup/acl.sh@16 -- # local dev driver 00:04:20.319 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.319 04:44:27 -- setup/acl.sh@15 -- # setup output status 00:04:20.319 04:44:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.319 04:44:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:20.320 Hugepages 00:04:20.320 node hugesize free / total 00:04:20.579 04:44:27 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:20.579 04:44:27 -- setup/acl.sh@19 -- # continue 00:04:20.580 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.580 00:04:20.580 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.580 04:44:27 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:20.580 04:44:27 -- setup/acl.sh@19 -- # continue 00:04:20.580 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.580 04:44:27 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:20.580 04:44:27 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:20.580 04:44:27 -- setup/acl.sh@20 -- # continue 00:04:20.580 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.580 04:44:27 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:20.580 04:44:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.580 04:44:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:20.580 04:44:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.580 04:44:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.580 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.841 04:44:27 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:20.841 04:44:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.841 04:44:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:20.841 04:44:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.841 04:44:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.841 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.841 04:44:27 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:20.841 04:44:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.841 04:44:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:20.841 04:44:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.841 04:44:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.841 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.098 04:44:27 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:21.098 04:44:27 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:21.098 04:44:27 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:21.098 04:44:27 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:21.098 04:44:27 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:21.098 04:44:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.098 04:44:27 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:21.098 04:44:27 -- setup/acl.sh@54 -- # run_test denied denied 00:04:21.098 04:44:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.098 
00:04:21.098 04:44:27 -- setup/acl.sh@54 -- # run_test denied denied
00:04:21.098 04:44:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:21.098 04:44:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:21.098 04:44:27 -- common/autotest_common.sh@10 -- # set +x
00:04:21.098 ************************************
00:04:21.098 START TEST denied
00:04:21.099 ************************************
00:04:21.099 04:44:27 -- common/autotest_common.sh@1104 -- # denied
00:04:21.099 04:44:27 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:04:21.099 04:44:27 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:04:21.099 04:44:27 -- setup/acl.sh@38 -- # setup output config
00:04:21.099 04:44:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.099 04:44:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:22.034 lsblk: /dev/nvme2c2n1: not a block device
00:04:22.292 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:04:22.292 04:44:29 -- setup/acl.sh@40 -- # verify 0000:00:06.0
00:04:22.292 04:44:29 -- setup/acl.sh@28 -- # local dev driver
00:04:22.292 04:44:29 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:22.292 04:44:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:04:22.292 04:44:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:04:22.292 04:44:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:22.292 04:44:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:22.292 04:44:29 -- setup/acl.sh@41 -- # setup reset
00:04:22.292 04:44:29 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:22.292 04:44:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:28.857
00:04:28.857 real 0m7.380s
00:04:28.857 user 0m0.901s
00:04:28.857 sys 0m1.544s
00:04:28.857 04:44:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:28.857 04:44:35 -- common/autotest_common.sh@10 -- # set +x
00:04:28.857 ************************************
00:04:28.857 END TEST denied
00:04:28.857 ************************************
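The denied test relies on setup.sh honoring PCI_BLOCKED: with PCI_BLOCKED=' 0000:00:06.0' the config pass prints "Skipping denied controller at 0000:00:06.0" instead of rebinding that controller. A reduced sketch of that gate; `pci_can_use` is a hypothetical helper name here, not a claim about the literal setup.sh source:

    # Return success only for controllers the environment permits us to touch.
    # Hypothetical helper for illustration.
    pci_can_use() {
        local bdf=$1
        # A non-empty allow-list is exclusive: anything absent from it is denied.
        if [[ -n ${PCI_ALLOWED:-} && $PCI_ALLOWED != *"$bdf"* ]]; then
            return 1
        fi
        # Otherwise deny only what the block-list names.
        [[ ${PCI_BLOCKED:-} != *"$bdf"* ]]
    }

    PCI_BLOCKED=' 0000:00:06.0'
    pci_can_use 0000:00:06.0 || echo "Skipping denied controller at 0000:00:06.0"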
00:04:28.857 04:44:35 -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:28.857 04:44:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:28.857 04:44:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:28.857 04:44:35 -- common/autotest_common.sh@10 -- # set +x
00:04:28.857 ************************************
00:04:28.857 START TEST allowed
00:04:28.857 ************************************
00:04:28.857 04:44:35 -- common/autotest_common.sh@1104 -- # allowed
00:04:28.857 04:44:35 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:28.857 04:44:35 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:28.857 04:44:35 -- setup/acl.sh@45 -- # setup output config
00:04:28.857 04:44:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.857 04:44:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:29.425 lsblk: /dev/nvme0c0n1: not a block device
00:04:29.684 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:29.684 04:44:36 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:04:29.684 04:44:36 -- setup/acl.sh@28 -- # local dev driver
00:04:29.684 04:44:36 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:29.684 04:44:36 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]]
00:04:29.684 04:44:36 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver
00:04:29.684 04:44:36 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:29.684 04:44:36 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:29.684 04:44:36 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:29.684 04:44:36 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]]
00:04:29.684 04:44:36 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver
00:04:29.684 04:44:36 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:29.684 04:44:36 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:29.684 04:44:36 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:29.684 04:44:36 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]]
00:04:29.684 04:44:36 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver
00:04:29.684 04:44:36 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:29.684 04:44:36 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:29.684 04:44:36 -- setup/acl.sh@48 -- # setup reset
00:04:29.684 04:44:36 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:29.684 04:44:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:31.060
00:04:31.060 real 0m2.460s
00:04:31.060 user 0m1.107s
00:04:31.060 sys 0m1.338s
00:04:31.060 04:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:31.060 04:44:37 -- common/autotest_common.sh@10 -- # set +x
00:04:31.060 ************************************
00:04:31.060 END TEST allowed
00:04:31.060 ************************************
00:04:31.060
00:04:31.060 real 0m11.983s
00:04:31.060 user 0m2.947s
00:04:31.060 sys 0m4.123s
00:04:31.060 04:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:31.060 04:44:37 -- common/autotest_common.sh@10 -- # set +x
00:04:31.060 ************************************
00:04:31.060 END TEST acl
00:04:31.060 ************************************
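Both subtests lean on the verify() helper traced above: for each BDF it resolves /sys/bus/pci/devices/<bdf>/driver with readlink and insists the basename is nvme. A condensed version of that loop, reduced from the trace rather than copied from acl.sh:

    # Fail unless every given PCI address is currently bound to the nvme driver.
    verify() {
        local dev driver
        for dev in "$@"; do
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver") || return 1
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }

    verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 && echo "still bound to nvme"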
00:04:31.060 04:44:37 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:31.061 04:44:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:31.061 04:44:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:31.061 04:44:37 -- common/autotest_common.sh@10 -- # set +x
00:04:31.061 ************************************
00:04:31.061 START TEST hugepages
00:04:31.061 ************************************
00:04:31.061 04:44:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:31.061 * Looking for test storage...
00:04:31.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:31.061 04:44:38 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:31.061 04:44:38 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:31.061 04:44:38 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:31.061 04:44:38 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:31.061 04:44:38 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:31.061 04:44:38 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:31.061 04:44:38 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:31.061 04:44:38 -- setup/common.sh@18 -- # local node=
00:04:31.061 04:44:38 -- setup/common.sh@19 -- # local var val
00:04:31.061 04:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.061 04:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.061 04:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.061 04:44:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.061 04:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.061 04:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.061 04:44:38 -- setup/common.sh@31 -- # IFS=': '
00:04:31.061 04:44:38 -- setup/common.sh@31 -- # read -r var val _
00:04:31.061 04:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5445116 kB' 'MemAvailable: 7421436 kB' 'Buffers: 3456 kB' 'Cached: 2187148 kB' 'SwapCached: 0 kB' 'Active: 840876 kB' 'Inactive: 1450008 kB' 'Active(anon): 110792 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 101952 kB' 'Mapped: 51424 kB' 'Shmem: 10512 kB' 'KReclaimable: 66336 kB' 'Slab: 139716 kB' 'SReclaimable: 66336 kB' 'SUnreclaim: 73380 kB' 'KernelStack: 6508 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:31.061 04:44:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:31.061 04:44:38 -- setup/common.sh@32 -- # continue
00:04:31.061 04:44:38 -- setup/common.sh@31 -- # IFS=': '
00:04:31.061 04:44:38 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read xtrace repeated for each remaining /proc/meminfo key listed above, until Hugepagesize matches ...]
00:04:31.062 04:44:38 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:31.062 04:44:38 -- setup/common.sh@33 -- # echo 2048
00:04:31.062 04:44:38 -- setup/common.sh@33 -- # return 0
00:04:31.062 04:44:38 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:31.062 04:44:38 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:31.062 04:44:38 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
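The xtrace condensed above is just a key lookup: get_meminfo splits each /proc/meminfo line on ': ', skips every key that is not the one requested, and echoes the value of the first match (2048 for Hugepagesize here). A minimal version that keeps only the /proc/meminfo path; the real common.sh also handles the per-node meminfo files, which this sketch omits:

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only; any unit suffix (kB) lands in $_
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize   # -> 2048 on this VM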
00:04:31.062 04:44:38 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:31.062 04:44:38 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:31.062 04:44:38 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:31.062 04:44:38 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:31.062 04:44:38 -- setup/hugepages.sh@207 -- # get_nodes
00:04:31.062 04:44:38 -- setup/hugepages.sh@27 -- # local node
00:04:31.062 04:44:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.062 04:44:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:31.062 04:44:38 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:31.062 04:44:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:31.062 04:44:38 -- setup/hugepages.sh@208 -- # clear_hp
00:04:31.062 04:44:38 -- setup/hugepages.sh@37 -- # local node hp
00:04:31.062 04:44:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:31.062 04:44:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:31.062 04:44:38 -- setup/hugepages.sh@41 -- # echo 0
00:04:31.062 04:44:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:31.062 04:44:38 -- setup/hugepages.sh@41 -- # echo 0
00:04:31.062 04:44:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:31.062 04:44:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:31.062 04:44:38 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:31.062 04:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:31.062 04:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:31.062 04:44:38 -- common/autotest_common.sh@10 -- # set +x
00:04:31.062 ************************************
00:04:31.062 START TEST default_setup
00:04:31.062 ************************************
00:04:31.062 04:44:38 -- common/autotest_common.sh@1104 -- # default_setup
00:04:31.062 04:44:38 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:31.062 04:44:38 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:31.062 04:44:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:31.062 04:44:38 -- setup/hugepages.sh@51 -- # shift
00:04:31.062 04:44:38 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:31.062 04:44:38 -- setup/hugepages.sh@52 -- # local node_ids
00:04:31.062 04:44:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:31.062 04:44:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:31.062 04:44:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:31.062 04:44:38 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:31.062 04:44:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:31.062 04:44:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:31.062 04:44:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:31.062 04:44:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:31.062 04:44:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:31.062 04:44:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:31.062 04:44:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:31.062 04:44:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:31.062 04:44:38 -- setup/hugepages.sh@73 -- # return 0
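get_test_nr_hugepages above is straight division: the requested 2097152 kB (2 GiB) pool over the 2048 kB default page size yields nr_hugepages=1024, and with node 0 as the only user node the whole count lands in nodes_test[0]. The same arithmetic as a standalone snippet, reading the page size with awk instead of the harness's get_meminfo:

    size_kb=2097152                                            # requested pool, in kB
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    (( nr_hugepages = size_kb / hp_kb ))
    echo "$nr_hugepages"                                       # -> 1024
    # One target node means no splitting: nodes_test[0]=1024, as in the trace.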
00:04:31.062 04:44:38 -- setup/hugepages.sh@137 -- # setup output
00:04:31.062 04:44:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.062 04:44:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:31.997 lsblk: /dev/nvme0c0n1: not a block device
00:04:32.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.256 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:32.256 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:32.256 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:32.519 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:32.519 04:44:39 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:32.519 04:44:39 -- setup/hugepages.sh@89 -- # local node
00:04:32.519 04:44:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.519 04:44:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.519 04:44:39 -- setup/hugepages.sh@92 -- # local surp
00:04:32.519 04:44:39 -- setup/hugepages.sh@93 -- # local resv
00:04:32.519 04:44:39 -- setup/hugepages.sh@94 -- # local anon
00:04:32.519 04:44:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:32.519 04:44:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.519 04:44:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.519 04:44:39 -- setup/common.sh@18 -- # local node=
00:04:32.519 04:44:39 -- setup/common.sh@19 -- # local var val
00:04:32.519 04:44:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.519 04:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.519 04:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.519 04:44:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.519 04:44:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.519 04:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.519 04:44:39 -- setup/common.sh@31 -- # IFS=': '
00:04:32.519 04:44:39 -- setup/common.sh@31 -- # read -r var val _
00:04:32.519 04:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7552324 kB' 'MemAvailable: 9528440 kB' 'Buffers: 3456 kB' 'Cached: 2187132 kB' 'SwapCached: 0 kB' 'Active: 857956 kB' 'Inactive: 1450016 kB' 'Active(anon): 127872 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450016 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118580 kB' 'Mapped: 51624 kB' 'Shmem: 10472 kB' 'KReclaimable: 65912 kB' 'Slab: 139244 kB' 'SReclaimable: 65912 kB' 'SUnreclaim: 73332 kB' 'KernelStack: 6524 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:32.519 04:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.519 04:44:39 -- setup/common.sh@32 -- # continue
00:04:32.519 04:44:39 -- setup/common.sh@31 -- # IFS=': '
00:04:32.519 04:44:39 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read xtrace repeated for each remaining key until AnonHugePages matches ...]
00:04:32.520 04:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.520 04:44:39 -- setup/common.sh@33 -- # echo 0
00:04:32.521 04:44:39 -- setup/common.sh@33 -- # return 0
00:04:32.521 04:44:39 -- setup/hugepages.sh@97 -- # anon=0
00:04:32.521 04:44:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:32.521 04:44:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.521 04:44:39 -- setup/common.sh@18 -- # local node=
00:04:32.521 04:44:39 -- setup/common.sh@19 -- # local var val
00:04:32.521 04:44:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.521 04:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.521 04:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.521 04:44:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.521 04:44:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.521 04:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.521 04:44:39 -- setup/common.sh@31 -- # IFS=': '
00:04:32.521 04:44:39 -- setup/common.sh@31 -- # read -r var val _
00:04:32.521 04:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7552084 kB' 'MemAvailable: 9528212 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857576 kB' 'Inactive: 1450028 kB' 'Active(anon): 127492 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118640 kB' 'Mapped: 51528 kB' 'Shmem: 10472 kB' 'KReclaimable: 65912 kB' 'Slab: 139244 kB' 'SReclaimable: 65912 kB' 'SUnreclaim: 73332 kB' 'KernelStack: 6460 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:32.521 04:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.521 04:44:39 -- setup/common.sh@32 -- # continue
00:04:32.521 04:44:39 -- setup/common.sh@31 -- # IFS=': '
00:04:32.521 04:44:39 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read xtrace repeated for each remaining key until HugePages_Surp matches ...]
00:04:32.522 04:44:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.522 04:44:39 -- setup/common.sh@33 -- # echo 0
00:04:32.522 04:44:39 -- setup/common.sh@33 -- # return 0
00:04:32.522 04:44:39 -- setup/hugepages.sh@99 -- # surp=0
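verify_nr_hugepages gathers its bookkeeping from two places: the transparent-hugepage mode line (here `always [madvise] never`, i.e. not disabled) and the AnonHugePages / HugePages_Surp / HugePages_Rsvd counters, all of which read 0 in this run. A compact restatement of those probes, not the literal hugepages.sh code:

    if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) == *'[never]'* ]]; then
        anon=0   # THP disabled: nothing to account against the pool
    else
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB
    fi
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # pages
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # pages
    echo "anon=${anon}kB surp=${surp} resv=${resv}"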
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7552084 kB' 'MemAvailable: 9528216 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857360 kB' 'Inactive: 1450032 kB' 'Active(anon): 127276 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118444 kB' 'Mapped: 51528 kB' 'Shmem: 10472 kB' 'KReclaimable: 65912 kB' 'Slab: 139072 kB' 'SReclaimable: 65912 kB' 'SUnreclaim: 73160 kB' 'KernelStack: 6476 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.522 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.522 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # 
continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.523 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.523 04:44:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.524 04:44:39 -- setup/common.sh@33 -- # echo 0 00:04:32.524 04:44:39 -- setup/common.sh@33 -- # return 0 00:04:32.524 04:44:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:32.524 nr_hugepages=1024 00:04:32.524 04:44:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.524 resv_hugepages=0 00:04:32.524 04:44:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.524 surplus_hugepages=0 00:04:32.524 04:44:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.524 anon_hugepages=0 00:04:32.524 04:44:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.524 04:44:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.524 04:44:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.524 04:44:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.524 04:44:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.524 04:44:39 -- setup/common.sh@18 -- # local node= 00:04:32.524 04:44:39 -- setup/common.sh@19 -- # local var val 00:04:32.524 04:44:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.524 04:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.524 04:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.524 04:44:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.524 04:44:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.524 04:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.524 04:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7552084 kB' 'MemAvailable: 9528216 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857396 kB' 'Inactive: 1450032 kB' 'Active(anon): 127312 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118220 kB' 'Mapped: 51528 kB' 'Shmem: 10472 kB' 'KReclaimable: 65912 kB' 'Slab: 139072 kB' 'SReclaimable: 65912 kB' 'SUnreclaim: 73160 kB' 'KernelStack: 6492 kB' 'PageTables: 4248 kB' 
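Every get_meminfo call traced above follows the same pattern: slurp the meminfo file, strip any per-node prefix, then scan key/value pairs until the requested field matches. A minimal standalone sketch of that helper, reconstructed from the setup/common.sh@17-33 trace entries rather than copied from the source tree, so treat details as approximate:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo <field> [<numa-node>] -- prints the field's value
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node id, prefer the per-node meminfo file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp      # -> 0 in the run above
    get_meminfo HugePages_Surp 0    # node-0 variant used later in the trace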
00:04:32.524 04:44:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:32.524 04:44:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:32.524 04:44:39 -- setup/common.sh@18 -- # local node=
00:04:32.524 04:44:39 -- setup/common.sh@19 -- # local var val
00:04:32.524 04:44:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.524 04:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.524 04:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.524 04:44:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.524 04:44:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.524 04:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.524 04:44:39 -- setup/common.sh@31 -- # IFS=': '
00:04:32.524 04:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7552084 kB' 'MemAvailable: 9528216 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857396 kB' 'Inactive: 1450032 kB' 'Active(anon): 127312 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118220 kB' 'Mapped: 51528 kB' 'Shmem: 10472 kB' 'KReclaimable: 65912 kB' 'Slab: 139072 kB' 'SReclaimable: 65912 kB' 'SUnreclaim: 73160 kB' 'KernelStack: 6492 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:32.524 04:44:39 -- setup/common.sh@31 -- # read -r var val _
00:04:32.524 04:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:32.524 04:44:39 -- setup/common.sh@32 -- # continue
00:04:32.525 04:44:39 -- setup/common.sh@31 -- # read -r var val _
00:04:32.525 04:44:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:32.525 04:44:39 -- setup/common.sh@33 -- # echo 1024
00:04:32.525 04:44:39 -- setup/common.sh@33 -- # return 0
00:04:32.525 04:44:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.525 04:44:39 -- setup/hugepages.sh@112 -- # get_nodes
00:04:32.525 04:44:39 -- setup/hugepages.sh@27 -- # local node
00:04:32.525 04:44:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:32.525 04:44:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:32.525 04:44:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:32.525 04:44:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
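Condensing the arithmetic the trace just performed (setup/hugepages.sh@99-110): with surplus and reserved pages both read back as 0 and nr_hugepages set to 1024, the kernel-reported total must account for every requested page. A sketch of that consistency check, using the get_meminfo values from the dumps above:

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0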
00:04:32.525 04:44:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:32.525 04:44:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:32.525 04:44:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:32.525 04:44:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.525 04:44:39 -- setup/common.sh@18 -- # local node=0
00:04:32.525 04:44:39 -- setup/common.sh@19 -- # local var val
00:04:32.525 04:44:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.525 04:44:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.525 04:44:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:32.525 04:44:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:32.525 04:44:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.525 04:44:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.526 04:44:39 -- setup/common.sh@31 -- # IFS=': '
00:04:32.526 04:44:39 -- setup/common.sh@31 -- # read -r var val _
00:04:32.526 04:44:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7552084 kB' 'MemUsed: 4689888 kB' 'SwapCached: 0 kB' 'Active: 857368 kB' 'Inactive: 1450032 kB' 'Active(anon): 127284 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2190592 kB' 'Mapped: 51528 kB' 'AnonPages: 118452 kB' 'Shmem: 10472 kB' 'KernelStack: 6476 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65912 kB' 'Slab: 139072 kB' 'SReclaimable: 65912 kB' 'SUnreclaim: 73160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:32.526 04:44:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.526 04:44:39 -- setup/common.sh@32 -- # continue
00:04:32.527 04:44:39 -- setup/common.sh@31 -- # read -r var val _
00:04:32.527 04:44:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.527 04:44:39 -- setup/common.sh@33 -- # echo 0
00:04:32.527 04:44:39 -- setup/common.sh@33 -- # return 0
00:04:32.527 04:44:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.527 04:44:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.527 04:44:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.527 04:44:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.527 node0=1024 expecting 1024
00:04:32.527 04:44:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:32.527 04:44:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:32.527 
00:04:32.527 real 0m1.517s
00:04:32.527 user 0m0.694s
00:04:32.527 sys 0m0.799s
00:04:32.527 04:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:32.527 04:44:39 -- common/autotest_common.sh@10 -- # set +x
00:04:32.527 ************************************
00:04:32.527 END TEST default_setup
00:04:32.527 ************************************
00:04:32.786 04:44:39 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:32.786 04:44:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
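The per-node pass that produced 'node0=1024 expecting 1024' above adds each node's reserved and surplus pages to the expected count before comparing it with what sysfs reports. A condensed reconstruction of that loop (setup/hugepages.sh@112-130); the nodes_test/nodes_sys names come from the trace, while the array initialization here is illustrative for this single-node VM:

    nodes_test=([0]=1024)   # pages this test expects per node
    nodes_sys=([0]=1024)    # pages get_nodes read back from sysfs
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # +0 here
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # +0 here
        echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done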
00:04:32.786 04:44:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:32.786 04:44:39 -- common/autotest_common.sh@10 -- # set +x
00:04:32.786 ************************************
00:04:32.786 START TEST per_node_1G_alloc
00:04:32.786 ************************************
00:04:32.786 04:44:39 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:32.786 04:44:39 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:32.786 04:44:39 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:32.786 04:44:39 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:32.786 04:44:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:32.786 04:44:39 -- setup/hugepages.sh@51 -- # shift
00:04:32.786 04:44:39 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:32.786 04:44:39 -- setup/hugepages.sh@52 -- # local node_ids
00:04:32.786 04:44:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:32.786 04:44:39 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:32.786 04:44:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:32.786 04:44:39 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:32.786 04:44:39 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:32.786 04:44:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:32.786 04:44:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:32.786 04:44:39 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:32.786 04:44:39 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:32.786 04:44:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:32.786 04:44:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:32.786 04:44:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:32.786 04:44:39 -- setup/hugepages.sh@73 -- # return 0
00:04:32.786 04:44:39 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:32.786 04:44:39 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:32.786 04:44:39 -- setup/hugepages.sh@146 -- # setup output
00:04:32.786 04:44:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:32.786 04:44:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:33.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:33.326 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.326 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.326 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.326 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.326 04:44:40 -- setup/hugepages.sh@147 -- # nr_hugepages=512
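The jump from the 1 GiB request to NRHUGE=512 in the trace above is plain division by the default huge page size: get_test_nr_hugepages received 1048576 (kB) plus node id 0 and derived 512 pages pinned to that node. The arithmetic, spelled out (page size per the Hugepagesize entries in the dumps):

    size_kb=1048576            # requested: 1 GiB expressed in kB
    hugepage_kb=2048           # Hugepagesize: 2048 kB in the dumps above
    echo $(( size_kb / hugepage_kb ))   # -> 512, exported as NRHUGE with HUGENODE=0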
04:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.326 04:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.326 04:44:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.326 04:44:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.326 04:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.326 04:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8604984 kB' 'MemAvailable: 10581088 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 858068 kB' 'Inactive: 1450036 kB' 'Active(anon): 127984 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 51588 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 138972 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73124 kB' 'KernelStack: 6604 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # continue 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # continue 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # continue 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # continue 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.326 04:44:40 -- setup/common.sh@32 -- # continue 00:04:33.326 04:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.327 04:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.327 04:44:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.327 04:44:40 -- setup/common.sh@32 -- # continue 00:04:33.327 04:44:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.327 04:44:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.327 04:44:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:33.327 04:44:40 -- setup/common.sh@32 -- # continue
00:04:33.327 04:44:40 -- setup/common.sh@31 -- # IFS=': '
00:04:33.327 04:44:40 -- setup/common.sh@31 -- # read -r var val _
00:04:33.327 04:44:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.327 04:44:40 -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31/@32 IFS/read/'[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]'/'# continue' iterations for the remaining /proc/meminfo keys (Active(anon) through HardwareCorrupted) elided ...]
00:04:33.327 04:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.327 04:44:40 -- setup/common.sh@33 -- # echo 0
00:04:33.327 04:44:40 -- setup/common.sh@33 -- # return 0
00:04:33.327 04:44:40 -- setup/hugepages.sh@97 -- # anon=0
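
The scan above is the hot loop of get_meminfo: it walks a mapfile'd copy of /proc/meminfo, splitting each line on ': ', until the requested key matches, then echoes the value. Below is a minimal standalone sketch of the same parse, assuming a stock /proc/meminfo layout; the helper name meminfo_value is ours, and the traced setup/common.sh version additionally handles per-node files.

meminfo_value() {
    # Split each "Key:   value kB" line on ':' and whitespace; $var gets the
    # key, $val the number, and _ swallows the trailing "kB" unit if present.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

meminfo_value AnonHugePages   # prints 0 on this runner, matching anon=0 above

The backslash escaping in the trace ([[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]) is just how xtrace renders a quoted right-hand side: the comparison is literal rather than a glob match.
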
00:04:33.327 04:44:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:33.327 04:44:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.327 04:44:40 -- setup/common.sh@18 -- # local node=
00:04:33.327 04:44:40 -- setup/common.sh@19 -- # local var val
00:04:33.327 04:44:40 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.327 04:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.327 04:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.327 04:44:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.327 04:44:40 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.327 04:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.327 04:44:40 -- setup/common.sh@31 -- # IFS=': '
00:04:33.328 04:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8604984 kB' 'MemAvailable: 10581088 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857676 kB' 'Inactive: 1450036 kB' 'Active(anon): 127592 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118836 kB' 'Mapped: 51536 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 138956 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73108 kB' 'KernelStack: 6508 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:33.328 04:44:40 -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@32 checks of each key against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, each followed by '# continue', elided (MemTotal through HugePages_Rsvd) ...]
00:04:33.329 04:44:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.329 04:44:40 -- setup/common.sh@33 -- # echo 0
00:04:33.329 04:44:40 -- setup/common.sh@33 -- # return 0
00:04:33.329 04:44:40 -- setup/hugepages.sh@99 -- # surp=0
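
HugePages_Surp counts pages the kernel allocated beyond the persistent pool through the overcommit path; it reads 0 here because the 512-page pool was sized statically. As a sketch, the same counters can be cross-checked from sysfs; the path below is the standard kernel hugetlb layout for the 2048 kB page size shown in the snapshot.

for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    # Each sysfs file holds a bare page count for the 2 MiB pool.
    printf '%-20s %s\n' "$f:" "$(cat "/sys/kernel/mm/hugepages/hugepages-2048kB/$f")"
done
# Per the snapshot above this should print 512, 512, 0 and 0.
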
00:04:33.329 04:44:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:33.329 04:44:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:33.329 04:44:40 -- setup/common.sh@18 -- # local node=
00:04:33.329 04:44:40 -- setup/common.sh@19 -- # local var val
00:04:33.329 04:44:40 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.329 04:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.329 04:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.329 04:44:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.329 04:44:40 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.329 04:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.329 04:44:40 -- setup/common.sh@31 -- # IFS=': '
00:04:33.329 04:44:40 -- setup/common.sh@31 -- # read -r var val _
00:04:33.329 04:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605420 kB' 'MemAvailable: 10581524 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857344 kB' 'Inactive: 1450036 kB' 'Active(anon): 127260 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118656 kB' 'Mapped: 51416 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 138956 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73108 kB' 'KernelStack: 6476 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[... identical setup/common.sh@32 checks of each key against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, each followed by '# continue', elided (MemTotal through HugePages_Free) ...]
00:04:33.330 04:44:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.330 04:44:40 -- setup/common.sh@33 -- # echo 0
00:04:33.330 04:44:40 -- setup/common.sh@33 -- # return 0
00:04:33.330 04:44:40 -- setup/hugepages.sh@100 -- # resv=0
00:04:33.330 nr_hugepages=512
00:04:33.330 04:44:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:33.330 resv_hugepages=0
00:04:33.330 surplus_hugepages=0
00:04:33.330 04:44:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:33.330 04:44:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:33.330 anon_hugepages=0
00:04:33.330 04:44:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:33.330 04:44:40 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:33.330 04:44:40 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
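
The arithmetic guards at hugepages.sh@107/@109 here, and the @110 re-check that follows, assert that the pool the kernel reports is exactly the pool the test requested: with surp and resv both 0, 512 == nr_hugepages + surp + resv collapses to 512 == nr_hugepages. A sketch of the same check, reusing the meminfo_value helper sketched earlier (the traced script's exact bookkeeping may differ in detail):

nr_hugepages=512          # requested pool size, per the trace above
surp=0 resv=0             # values just read back from /proc/meminfo
total=$(meminfo_value HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "pool drifted: total=$total surp=$surp resv=$resv" >&2
fi
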
00:04:33.330 04:44:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:33.330 04:44:40 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:33.330 04:44:40 -- setup/common.sh@18 -- # local node=
00:04:33.330 04:44:40 -- setup/common.sh@19 -- # local var val
00:04:33.330 04:44:40 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.330 04:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.330 04:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.330 04:44:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.330 04:44:40 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.330 04:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.330 04:44:40 -- setup/common.sh@31 -- # IFS=': '
00:04:33.330 04:44:40 -- setup/common.sh@31 -- # read -r var val _
00:04:33.330 04:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605420 kB' 'MemAvailable: 10581524 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857200 kB' 'Inactive: 1450036 kB' 'Active(anon): 127116 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118508 kB' 'Mapped: 51524 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 138984 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73136 kB' 'KernelStack: 6464 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[... identical setup/common.sh@32 checks of each key against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, each followed by '# continue', elided (MemTotal through Unaccepted) ...]
00:04:33.332 04:44:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.332 04:44:40 -- setup/common.sh@33 -- # echo 512
00:04:33.332 04:44:40 -- setup/common.sh@33 -- # return 0
00:04:33.332 04:44:40 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:33.332 04:44:40 -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.332 04:44:40 -- setup/hugepages.sh@27 -- # local node
00:04:33.332 04:44:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.332 04:44:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:33.332 04:44:40 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:33.332 04:44:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:33.332 04:44:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.332 04:44:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.332 04:44:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
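
get_meminfo takes an optional node argument; with node=0 the call traced next switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the script strips with the extglob substitution mem=("${mem[@]#Node +([0-9]) }"). A sketch of the per-node lookup, using sed for the same stripping; the function name node_meminfo_value is ours.

node_meminfo_value() {
    # Per-node meminfo lines look like "Node 0 HugePages_Surp:     0";
    # drop the "Node N " prefix, then parse exactly as for /proc/meminfo.
    local node=$1 get=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "/sys/devices/system/node/node$node/meminfo")
    return 1
}

node_meminfo_value 0 HugePages_Surp   # 0, matching the node0 snapshot below
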
04:44:40 -- setup/common.sh@18 -- # local node=0
00:04:33.332 04:44:40 -- setup/common.sh@19 -- # local var val
00:04:33.332 04:44:40 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.332 04:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.332 04:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:33.332 04:44:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:33.332 04:44:40 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.332 04:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.332 04:44:40 -- setup/common.sh@31 -- # IFS=': '
00:04:33.332 04:44:40 -- setup/common.sh@31 -- # read -r var val _
00:04:33.332 04:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605420 kB' 'MemUsed: 3636552 kB' 'SwapCached: 0 kB' 'Active: 857140 kB' 'Inactive: 1450036 kB' 'Active(anon): 127056 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2190592 kB' 'Mapped: 51524 kB' 'AnonPages: 118412 kB' 'Shmem: 10472 kB' 'KernelStack: 6448 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65848 kB' 'Slab: 138984 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 read each node0 meminfo field in turn and hit `continue` on every line that is not HugePages_Surp]
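The loop just elided is the whole mechanism behind setup/common.sh's get_meminfo: pick the per-node meminfo file when a node is given, strip the "Node N " prefix those files carry, then read colon-separated key/value pairs until the requested key matches. A minimal sketch reconstructed from this trace (not the verbatim SPDK helper; the function name is an assumption):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern seen in the xtrace above.
    shopt -s extglob   # required for the +([0-9]) pattern in the prefix strip

    get_meminfo_sketch() {
        local get=$1 node=${2-}
        local var val _
        local mem_f=/proc/meminfo mem
        # With an empty $node the path is .../node/node/meminfo, which never
        # exists, so the global /proc/meminfo is kept (as the trace shows later).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long continue-run in the log
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Surp     # -> 0 in the run traced here
    get_meminfo_sketch HugePages_Free 0   # node 0 reported 512 in the snapshot above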
00:04:33.333 04:44:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.333 04:44:40 -- setup/common.sh@33 -- # echo 0
00:04:33.333 04:44:40 -- setup/common.sh@33 -- # return 0
00:04:33.333 04:44:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:33.333 04:44:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:33.333 04:44:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:33.333 04:44:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=512 expecting 512
00:04:33.333 04:44:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:33.333 04:44:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:33.333
00:04:33.333 real 0m0.699s
00:04:33.333 user 0m0.326s
00:04:33.333 sys 0m0.420s
00:04:33.333 04:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:33.333 04:44:40 -- common/autotest_common.sh@10 -- # set +x
00:04:33.333 ************************************
00:04:33.333 END TEST per_node_1G_alloc
00:04:33.333 ************************************
00:04:33.333 04:44:40 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:33.333 04:44:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:33.333 04:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:33.333 04:44:40 -- common/autotest_common.sh@10 -- # set +x
00:04:33.333 ************************************
00:04:33.333 START TEST even_2G_alloc
00:04:33.333 ************************************
00:04:33.333 04:44:40 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:33.333 04:44:40 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:33.333 04:44:40 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:33.333 04:44:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:33.333 04:44:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:33.333 04:44:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:33.333 04:44:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:33.333 04:44:40 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:33.333 04:44:40 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:33.333 04:44:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:33.333 04:44:40 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:33.333 04:44:40 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:33.333 04:44:40 -- setup/hugepages.sh@67 -- # local -g nodes_test
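As a sanity check on the sizing just traced (values taken from this log; the snippet is illustrative, not part of the SPDK scripts): get_test_nr_hugepages receives 2097152, which reads as kB given the 'Hugepagesize: 2048 kB' in the meminfo snapshots and the resulting 1024-page pool. get_test_nr_hugepages_per_node then spreads that pool over the nodes in the trace that continues below.

    # sketch: reproduce nr_hugepages=1024 from the traced inputs
    size_kb=2097152        # argument to get_test_nr_hugepages (2 GiB)
    hugepage_kb=2048       # 'Hugepagesize: 2048 kB' from the snapshots
    echo $(( size_kb / hugepage_kb ))   # prints 1024, matching nr_hugepages=1024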
00:04:33.333 04:44:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:33.333 04:44:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:33.333 04:44:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:33.333 04:44:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:33.333 04:44:40 -- setup/hugepages.sh@83 -- # : 0
00:04:33.333 04:44:40 -- setup/hugepages.sh@84 -- # : 0
00:04:33.333 04:44:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:33.333 04:44:40 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:33.333 04:44:40 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:33.333 04:44:40 -- setup/hugepages.sh@153 -- # setup output
00:04:33.333 04:44:40 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.333 04:44:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:33.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:33.905 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.905 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.905 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.905 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
04:44:40 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:33.905 04:44:40 -- setup/hugepages.sh@89 -- # local node
00:04:33.905 04:44:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:33.905 04:44:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:33.905 04:44:40 -- setup/hugepages.sh@92 -- # local surp
00:04:33.905 04:44:40 -- setup/hugepages.sh@93 -- # local resv
00:04:33.905 04:44:40 -- setup/hugepages.sh@94 -- # local anon
00:04:33.905 04:44:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:33.905 04:44:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:33.905 04:44:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:33.905 04:44:40 -- setup/common.sh@18 -- # local node=
00:04:33.905 04:44:40 -- setup/common.sh@19 -- # local var val
00:04:33.905 04:44:40 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.905 04:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.905 04:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.905 04:44:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.906 04:44:40 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.906 04:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.906 04:44:40 -- setup/common.sh@31 -- # IFS=': '
00:04:33.906 04:44:40 -- setup/common.sh@31 -- # read -r var val _
00:04:33.906 04:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558712 kB' 'MemAvailable: 9534816 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 858600 kB' 'Inactive: 1450036 kB' 'Active(anon): 128516 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119736 kB' 'Mapped: 51964 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 138944 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73096 kB' 'KernelStack: 6568 kB' 'PageTables: 4688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: the same setup/common.sh@31-32 scan, this time continuing past every field that is not AnonHugePages]
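The hugepages.sh@96 test above is checking /sys/kernel/mm/transparent_hugepage/enabled, whose bracketed entry marks the active mode ("always [madvise] never" in this run); AnonHugePages is only worth sampling when that mode is not [never]. A standalone sketch of the same gate, assuming the standard sysfs path and using quoting in place of the backslash escapes the xtrace shows:

    # sketch: the THP gate from hugepages.sh@96, written out standalone
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can create anonymous hugepages, so AnonHugePages must be sampled
        echo "THP active: $thp"
    fi

The scan for AnonHugePages resumes below and matches at zero, so THP has created no anonymous hugepages that could skew the counts.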
00:04:33.907 04:44:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.907 04:44:40 -- setup/common.sh@33 -- # echo 0
00:04:33.907 04:44:40 -- setup/common.sh@33 -- # return 0
00:04:33.907 04:44:40 -- setup/hugepages.sh@97 -- # anon=0
00:04:33.907 04:44:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:33.907 04:44:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.907 04:44:40 -- setup/common.sh@18 -- # local node=
00:04:33.907 04:44:40 -- setup/common.sh@19 -- # local var val
00:04:33.907 04:44:40 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.907 04:44:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.907 04:44:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.907 04:44:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.907 04:44:40 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.907 04:44:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.907 04:44:40 -- setup/common.sh@31 -- # IFS=': '
00:04:33.907 04:44:40 -- setup/common.sh@31 -- # read -r var val _
00:04:33.907 04:44:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558488 kB' 'MemAvailable: 9534592 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857416 kB' 'Inactive: 1450036 kB' 'Active(anon): 127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118432 kB' 'Mapped: 51524 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 139016 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73168 kB' 'KernelStack: 6448 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
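For a manual spot-check, the counters verify_nr_hugepages extracts one field at a time can be pulled in one go with a grep over the same file (an equivalent one-liner, not taken from the SPDK scripts):

    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):' /proc/meminfo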
[xtrace elided: setup/common.sh@31-32 scan for HugePages_Surp, continuing past every other /proc/meminfo field]
00:04:33.908 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.908 04:44:41 -- setup/common.sh@33 -- # echo 0
00:04:33.908 04:44:41 -- setup/common.sh@33 -- # return 0
00:04:33.908 04:44:41 -- setup/hugepages.sh@99 -- # surp=0
00:04:33.908 04:44:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:33.908 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:33.908 04:44:41 -- setup/common.sh@18 -- # local node=
00:04:33.908 04:44:41 -- setup/common.sh@19 -- # local var val
00:04:33.908 04:44:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.908 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.908 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.908 04:44:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.908 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.908 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.908 04:44:41 -- setup/common.sh@31 -- # IFS=': '
00:04:33.908 04:44:41 -- setup/common.sh@31 -- # read -r var val _
00:04:33.908 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558488 kB' 'MemAvailable: 9534592 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857632 kB' 'Inactive: 1450036 kB' 'Active(anon): 127548 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118648 kB' 'Mapped: 51524 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 139012 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73164 kB' 'KernelStack: 6432 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: the field scan repeats for HugePages_Rsvd]
00:04:34.170 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.170 04:44:41 -- setup/common.sh@33 -- # echo 0
00:04:34.170 04:44:41 -- setup/common.sh@33 -- # return 0
00:04:34.170 04:44:41 -- setup/hugepages.sh@100 -- # resv=0
00:04:34.170 nr_hugepages=1024
04:44:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:04:34.170 04:44:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:04:34.170 04:44:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:34.170 04:44:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:34.170 04:44:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.170 04:44:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:34.170 04:44:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.170 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:34.170 04:44:41 -- setup/common.sh@18 -- # local node=
00:04:34.170 04:44:41 -- setup/common.sh@19 -- # local var val
00:04:34.170 04:44:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.170 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
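With anon, surp and resv gathered, the two arithmetic gates at hugepages.sh@107 and @109 are the actual assertion of even_2G_alloc: every configured page must be accounted for by the requested pool plus surplus and reserved pages. Restated with this run's values (a sketch; the literal 1024 on the left is whatever the script expanded there, consistent with both the requested pool and the HugePages_Total in the snapshots). The HugePages_Total read begun just above completes in the trace below.

    # sketch: the consistency gates from the trace, with this run's values
    nr_hugepages=1024; surp=0; resv=0
    (( 1024 == nr_hugepages + surp + resv )) &&   # hugepages.sh@107: passes
    (( 1024 == nr_hugepages )) &&                 # hugepages.sh@109: passes
    echo "hugepage accounting consistent"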
00:04:34.170 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.170 04:44:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.170 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.170 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.170 04:44:41 -- setup/common.sh@31 -- # IFS=': '
00:04:34.170 04:44:41 -- setup/common.sh@31 -- # read -r var val _
00:04:34.171 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558488 kB' 'MemAvailable: 9534592 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857224 kB' 'Inactive: 1450036 kB' 'Active(anon): 127140 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118288 kB' 'Mapped: 51524 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 139008 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73160 kB' 'KernelStack: 6464 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 begin the same field-by-field scan toward HugePages_Total; the excerpt breaks off mid-scan]
setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.171 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.171 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.172 04:44:41 -- setup/common.sh@33 -- # echo 1024 00:04:34.172 04:44:41 -- setup/common.sh@33 -- # return 0 00:04:34.172 04:44:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.172 04:44:41 -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.172 04:44:41 -- setup/hugepages.sh@27 -- # local node 00:04:34.172 04:44:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.172 04:44:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:34.172 04:44:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:34.172 04:44:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.172 04:44:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.172 04:44:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.172 04:44:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.172 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.172 04:44:41 -- setup/common.sh@18 -- # local node=0 00:04:34.172 04:44:41 -- setup/common.sh@19 -- # local var val 00:04:34.172 04:44:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.172 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.172 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.172 04:44:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.172 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.172 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558488 kB' 'MemUsed: 4683484 kB' 'SwapCached: 0 kB' 'Active: 857520 kB' 'Inactive: 1450036 kB' 'Active(anon): 127436 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2190592 kB' 'Mapped: 51524 kB' 'AnonPages: 118584 kB' 'Shmem: 10472 kB' 'KernelStack: 6480 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65848 kB' 'Slab: 139008 kB' 'SReclaimable: 65848 
kB' 'SUnreclaim: 73160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # 
continue 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.172 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.172 04:44:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.173 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.173 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.173 04:44:41 -- setup/common.sh@33 -- # echo 0 00:04:34.173 04:44:41 -- setup/common.sh@33 -- # return 0 00:04:34.173 04:44:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
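
The field-by-field trace above is setup/common.sh's get_meminfo helper: it loads /proc/meminfo (or a node's meminfo file when a node argument is given), strips the "Node <n> " prefix, then reads "Key: value" pairs until the requested key matches and echoes that value. A minimal runnable sketch of the same pattern, assuming the usual meminfo layout; the function name and the sed-based prefix strip are illustrative stand-ins, not the exact SPDK source:

    #!/usr/bin/env bash
    # Sketch: look up one key in a meminfo-style file, as traced above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        # A node argument switches to that node's own counters.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Node files prefix each row with "Node <n> "; strip it so both
        # layouts parse the same way.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # mismatch -> read next field
            echo "${val:-0}"                   # bare number, "kB" dropped
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1                               # key not present
    }
    # e.g. get_meminfo_sketch HugePages_Total    -> 1024 on the run above
    #      get_meminfo_sketch HugePages_Surp 0   -> 0 (node 0)
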
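get_nodes, also traced above, enumerates the NUMA node directories under /sys and records each node's configured hugepage count; this single-node VM yields only node0 with 1024 pages. A sketch of that enumeration, assuming the standard per-node sysfs counter and the 2048 kB default page size of this run (array and function names illustrative):

    # Sketch: collect per-node 2 MB hugepage counts from sysfs.
    declare -A nodes_sys
    get_nodes_sketch() {
        local node
        for node in /sys/devices/system/node/node[0-9]*; do
            # node0 -> key 0, node1 -> key 1, ...
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        echo "no_nodes=${#nodes_sys[@]}"   # 1 on this VM
    }
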
00:04:34.173 04:44:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:34.173 04:44:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:34.173 04:44:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:34.173 04:44:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:34.173 node0=1024 expecting 1024
00:04:34.173 04:44:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:34.173
00:04:34.173 real 0m0.669s
00:04:34.173 user 0m0.307s
00:04:34.173 sys 0m0.411s
00:04:34.173 04:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:34.173 04:44:41 -- common/autotest_common.sh@10 -- # set +x
00:04:34.173 ************************************
00:04:34.173 END TEST even_2G_alloc
00:04:34.173 ************************************
00:04:34.173 04:44:41 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:34.173 04:44:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:34.173 04:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:34.173 04:44:41 -- common/autotest_common.sh@10 -- # set +x
00:04:34.173 ************************************
00:04:34.173 START TEST odd_alloc
00:04:34.173 ************************************
00:04:34.173 04:44:41 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:34.173 04:44:41 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:34.173 04:44:41 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:34.173 04:44:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:34.173 04:44:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:34.173 04:44:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:34.173 04:44:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:34.173 04:44:41 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:34.173 04:44:41 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:34.173 04:44:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:34.173 04:44:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:34.173 04:44:41 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:34.173 04:44:41 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:34.173 04:44:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:34.173 04:44:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:34.173 04:44:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:34.173 04:44:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:34.173 04:44:41 -- setup/hugepages.sh@83 -- # : 0
00:04:34.173 04:44:41 -- setup/hugepages.sh@84 -- # : 0
00:04:34.173 04:44:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:34.173 04:44:41 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:34.173 04:44:41 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:34.173 04:44:41 -- setup/hugepages.sh@160 -- # setup output
00:04:34.173 04:44:41 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.173 04:44:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:34.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:34.745 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.745 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.745 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.745 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
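
Worked numbers for the odd_alloc sizing traced above: HUGEMEM=2049 (MB) becomes a request of size=2098176 kB, and with Hugepagesize at 2048 kB that is 2098176 / 2048 = 1024.5 pages, rounded up to the odd count nr_hugepages=1025, which is why the meminfo dumps below report HugePages_Total: 1025 and Hugetlb: 2099200 kB. A sketch of that arithmetic, assuming plain ceiling division (variable names illustrative):

    # 2049 MB requested -> kB -> whole 2048 kB hugepages, rounded up.
    size_kb=$(( 2049 * 1024 ))                  # 2098176 kB
    page_kb=2048                                # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))
    echo "$nr_hugepages pages = $(( nr_hugepages * page_kb )) kB"
    # -> "1025 pages = 2099200 kB", matching the Hugetlb line below
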
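verify_nr_hugepages, whose trace starts next, re-reads the counters and checks the kernel's accounting identity: HugePages_Total must equal the requested pages plus surplus plus reserved, and on a one-node VM the node0 total must match the global figure (the "node0=1024 expecting 1024" line earlier is the same check for the previous test). A compact sketch of that verification, reusing the get_meminfo_sketch helper from above (all names illustrative):

    # Sketch: the consistency checks behind verify_nr_hugepages.
    nr_hugepages=1025                                 # what odd_alloc asked for
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv )) ||
        echo "accounting mismatch: $total != $nr_hugepages + $surp + $resv"
    node0=$(get_meminfo_sketch HugePages_Total 0)     # per-node counter
    echo "node0=$node0 expecting $nr_hugepages"
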
00:04:34.745 04:44:41 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:34.745 04:44:41 -- setup/hugepages.sh@89 -- # local node
00:04:34.745 04:44:41 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:34.745 04:44:41 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:34.745 04:44:41 -- setup/hugepages.sh@92 -- # local surp
00:04:34.745 04:44:41 -- setup/hugepages.sh@93 -- # local resv
00:04:34.745 04:44:41 -- setup/hugepages.sh@94 -- # local anon
00:04:34.745 04:44:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:34.745 04:44:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:34.745 04:44:41 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:34.745 04:44:41 -- setup/common.sh@18 -- # local node=
00:04:34.745 04:44:41 -- setup/common.sh@19 -- # local var val
00:04:34.745 04:44:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.745 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.745 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.745 04:44:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.745 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.745 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.745 04:44:41 -- setup/common.sh@31 -- # IFS=': '
00:04:34.745 04:44:41 -- setup/common.sh@31 -- # read -r var val _
00:04:34.745 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549360 kB' 'MemAvailable: 9525464 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857636 kB' 'Inactive: 1450036 kB' 'Active(anon): 127552 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118644 kB' 'Mapped: 51540 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 139004 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73156 kB' 'KernelStack: 6520 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[... scan elided: every field from MemTotal through HardwareCorrupted misses AnonHugePages and hits "continue" ...]
00:04:34.745 04:44:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:34.745 04:44:41 -- setup/common.sh@33 -- # echo 0
00:04:34.746 04:44:41 -- setup/common.sh@33 -- # return 0
00:04:34.746 04:44:41 -- setup/hugepages.sh@97 -- # anon=0
00:04:34.746 04:44:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:34.746 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.746 04:44:41 -- setup/common.sh@18 -- # local node=
00:04:34.746 04:44:41 -- setup/common.sh@19 -- # local var val
00:04:34.746 04:44:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.746 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.746 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.746 04:44:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.746 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.746 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.746 04:44:41 -- setup/common.sh@31 -- # IFS=': '
00:04:34.746 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549360 kB' 'MemAvailable: 9525464 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857860 kB' 'Inactive: 1450036 kB' 'Active(anon): 127776 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118904 kB' 'Mapped: 51540 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 138996 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73148 kB' 'KernelStack: 6520 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:34.746 04:44:41 -- setup/common.sh@31 -- # read -r var val _
[... scan elided: every field from MemTotal through HugePages_Rsvd misses HugePages_Surp and hits "continue" ...]
00:04:34.747 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.747 04:44:41 -- setup/common.sh@33 -- # echo 0
00:04:34.747 04:44:41 -- setup/common.sh@33 -- # return 0
00:04:34.747 04:44:41 -- setup/hugepages.sh@99 -- # surp=0
00:04:34.747 04:44:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:34.747 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:34.747 04:44:41 -- setup/common.sh@18 -- # local node=
00:04:34.747 04:44:41 -- setup/common.sh@19 -- # local var val
00:04:34.747 04:44:41 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.747 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.747 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.747 04:44:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.747 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.747 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.747 04:44:41 -- setup/common.sh@31 -- # IFS=': '
00:04:34.747 04:44:41 -- setup/common.sh@31 -- # read -r var val _
00:04:34.747 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549108 kB' 'MemAvailable: 9525212 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857588 kB' 'Inactive: 1450036 kB' 'Active(anon): 127504 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118860 kB' 'Mapped: 51588 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 139016 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73168 kB' 'KernelStack: 6524 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[... scan elided: every field from MemTotal through AnonPages misses HugePages_Rsvd and hits "continue" ...]
00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.748 04:44:41 -- setup/common.sh@32 -- #
continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.748 04:44:41 -- setup/common.sh@32 -- # continue 00:04:34.748 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.749 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.749 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.749 04:44:41 -- setup/common.sh@33 -- # echo 0 00:04:34.749 04:44:41 -- setup/common.sh@33 -- # return 0 00:04:34.749 04:44:41 -- setup/hugepages.sh@100 -- # resv=0 00:04:34.749 nr_hugepages=1025 00:04:34.749 04:44:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:34.749 resv_hugepages=0 00:04:34.749 04:44:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.749 surplus_hugepages=0 00:04:34.749 04:44:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.749 anon_hugepages=0 00:04:34.749 04:44:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.749 04:44:41 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:34.749 04:44:41 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:34.749 04:44:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.749 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.749 04:44:41 -- setup/common.sh@18 -- # local node= 00:04:34.749 04:44:41 -- setup/common.sh@19 -- # local var val 00:04:34.749 04:44:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.749 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.749 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.749 04:44:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.749 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.749 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.749 04:44:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.749 04:44:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.749 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549108 kB' 'MemAvailable: 9525212 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857828 kB' 'Inactive: 1450036 kB' 'Active(anon): 127744 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118888 kB' 'Mapped: 51588 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 139016 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73168 kB' 'KernelStack: 6540 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 
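[Editor's note] Everything above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a per-node sysfs file) into an array with mapfile, then walks the keys with an IFS=': ' read loop until one matches, which is why xtrace prints one `[[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` pair per key. The backslash-escaped right-hand side is just how `set -x` renders a quoted, literal (non-glob) match pattern. A minimal standalone sketch of the same pattern -- not the SPDK helper itself; names and structure here are illustrative:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern seen in the xtrace: look up one key,
    # optionally from a NUMA node's sysfs meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node "$node" }       # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue # the stream of continues in the log
            echo "$val"
            return 0
        done <"$mem_f"
        return 1
    }

    get_meminfo HugePages_Rsvd     # global value, e.g. 0
    get_meminfo HugePages_Surp 0   # node-0 value from sysfs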
00:04:34.749 04:44:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.749 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:34.749 04:44:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo [locals, node fallback check and mapfile as above]
00:04:34.749 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549108 kB' 'Active: 857828 kB' ... 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' [remaining snapshot keys elided]
00:04:34.750 04:44:41 -- setup/common.sh@32 -- # [... xtrace scan elided: every key from MemTotal through Unaccepted hits `continue` against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ...]
00:04:34.750 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:34.750 04:44:41 -- setup/common.sh@33 -- # echo 1025
00:04:34.750 04:44:41 -- setup/common.sh@33 -- # return 0
00:04:34.750 04:44:41 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
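[Editor's note] verify_nr_hugepages cross-checks the kernel's counters against what the test configured: HugePages_Total must equal the requested page count plus surplus plus reserved pages. The same check sketched standalone, reusing the get_meminfo sketch above (variable names illustrative):

    # Consistency check behind the (( 1025 == nr_hugepages + surp + resv )) line.
    nr_hugepages=1025                        # what the odd_alloc test requested
    surp=$(get_meminfo HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
    total=$(get_meminfo HugePages_Total)     # 1025 in this run
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    fi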
00:04:34.750 04:44:41 -- setup/hugepages.sh@112 -- # get_nodes
00:04:34.750 04:44:41 -- setup/hugepages.sh@27 -- # local node
00:04:34.750 04:44:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.750 04:44:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:34.750 04:44:41 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.750 04:44:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.750 04:44:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.750 04:44:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:34.750 04:44:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.750 04:44:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.750 04:44:41 -- setup/common.sh@18 -- # local node=0
00:04:34.750 04:44:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:34.750 04:44:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:34.750 04:44:41 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.750 04:44:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.750 04:44:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549108 kB' 'MemUsed: 4692864 kB' ... 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' [node0 snapshot, remaining keys elided]
00:04:34.750 04:44:41 -- setup/common.sh@32 -- # [... xtrace scan elided: every node0 key hits `continue` against \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
00:04:34.751 04:44:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.751 04:44:41 -- setup/common.sh@33 -- # echo 0
00:04:34.751 04:44:41 -- setup/common.sh@33 -- # return 0
00:04:34.751 04:44:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
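[Editor's note] The node walk above relies on bash's extglob pathname pattern node+([0-9]) to enumerate NUMA nodes, then reads each node's counters from /sys/devices/system/node/nodeN/meminfo, whose lines are prefixed "Node N " -- hence the ${mem[@]#Node +([0-9]) } strip. A sketch of that enumeration, reusing the get_meminfo sketch above:

    # List each NUMA node's hugepage total; node+([0-9]) needs extglob, and
    # extglob must be enabled before the loop is parsed.
    shopt -s extglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        n=${node_dir##*node}                        # "node0" -> "0"
        total=$(get_meminfo HugePages_Total "$n")
        echo "node$n: HugePages_Total=$total"
    done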
00:04:34.751 04:44:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:34.751 04:44:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:34.751 04:44:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:34.751 node0=1025 expecting 1025
00:04:34.751 04:44:41 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:34.751 04:44:41 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:34.751 
00:04:34.751 real	0m0.677s
00:04:34.751 user	0m0.319s
00:04:34.751 sys	0m0.405s
00:04:34.751 04:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:34.751 04:44:41 -- common/autotest_common.sh@10 -- # set +x
00:04:34.751 ************************************
00:04:34.751 END TEST odd_alloc
00:04:34.751 ************************************
00:04:34.751 04:44:41 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:34.751 04:44:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:34.751 04:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:34.751 04:44:41 -- common/autotest_common.sh@10 -- # set +x
00:04:34.751 ************************************
00:04:34.751 START TEST custom_alloc
00:04:34.751 ************************************
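[Editor's note] The banner-and-timing framing (START TEST / real / user / sys / END TEST) comes from autotest_common.sh's run_test wrapper. A minimal equivalent for orientation only -- this is not the SPDK original:

    # Wrap a test command with banners and timing, roughly as the log shows.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # prints the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    run_test custom_alloc custom_alloc   # test name, then the command to run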
00:04:34.751 04:44:41 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:34.751 04:44:41 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:34.751 04:44:41 -- setup/hugepages.sh@169 -- # local node
00:04:34.751 04:44:41 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:35.011 04:44:41 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:35.012 04:44:41 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:35.012 04:44:41 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:35.012 04:44:41 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:35.012 04:44:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:35.012 04:44:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:35.012 04:44:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:35.012 04:44:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:35.012 04:44:41 -- setup/hugepages.sh@62 -- # [... xtrace elided: per-node bookkeeping seeds nodes_test[0]=512 ...]
00:04:35.012 04:44:41 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:35.012 04:44:41 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:35.012 04:44:41 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:35.012 04:44:41 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:35.012 04:44:41 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:35.012 04:44:41 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:35.012 04:44:41 -- setup/hugepages.sh@62 -- # [... xtrace elided: same per-node bookkeeping, now seeded from nodes_hp ...]
00:04:35.012 04:44:41 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:35.012 04:44:41 -- setup/hugepages.sh@187 -- # setup output
00:04:35.012 04:44:41 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.012 04:44:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:35.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:35.272 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.272 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.272 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.272 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
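[Editor's note] custom_alloc asks for 1 GiB of hugepage memory (get_test_nr_hugepages 1048576, in kB), derives the page count from the system hugepage size, and hands it to setup.sh per node via HUGENODE. The arithmetic, sketched under the assumption that default_hugepages holds the Hugepagesize value in kB (which matches 1048576 / 2048 = 512 in this run); get_meminfo is the sketch from earlier:

    # Derive the hugepage count the way the xtrace suggests; illustrative only.
    size_kb=1048576                                   # 1 GiB requested, in kB
    default_hugepages=$(get_meminfo Hugepagesize)     # 2048 (kB) on this VM
    if (( size_kb >= default_hugepages )); then
        (( nr_hugepages = size_kb / default_hugepages ))   # -> 512 pages
    fi
    HUGENODE="nodes_hp[0]=$nr_hugepages"              # consumed by setup.sh
    echo "$HUGENODE"                                  # nodes_hp[0]=512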
00:04:35.535 04:44:42 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:35.535 04:44:42 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:35.535 04:44:42 -- setup/hugepages.sh@89 -- # local node
00:04:35.535 04:44:42 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:35.535 04:44:42 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:35.535 04:44:42 -- setup/hugepages.sh@92 -- # local surp
00:04:35.535 04:44:42 -- setup/hugepages.sh@93 -- # local resv
00:04:35.535 04:44:42 -- setup/hugepages.sh@94 -- # local anon
00:04:35.535 04:44:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:35.535 04:44:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:35.535 04:44:42 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:35.535 04:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo [locals, node fallback check and mapfile as above]
00:04:35.535 04:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598128 kB' 'MemAvailable: 10574232 kB' 'Active: 858152 kB' ... 'AnonHugePages: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugetlb: 1048576 kB' [remaining snapshot keys elided]
00:04:35.536 04:44:42 -- setup/common.sh@32 -- # [... xtrace scan elided: every key before AnonHugePages hits `continue` against \A\n\o\n\H\u\g\e\P\a\g\e\s ...]
00:04:35.536 04:44:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.536 04:44:42 -- setup/common.sh@33 -- # echo 0
00:04:35.536 04:44:42 -- setup/common.sh@33 -- # return 0
00:04:35.536 04:44:42 -- setup/hugepages.sh@97 -- # anon=0
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.536 04:44:42 -- setup/common.sh@33 -- # echo 0 00:04:35.536 04:44:42 -- setup/common.sh@33 -- # return 0 00:04:35.536 04:44:42 -- setup/hugepages.sh@97 -- # anon=0 00:04:35.536 04:44:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.536 04:44:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.536 04:44:42 -- setup/common.sh@18 -- # local node= 00:04:35.536 04:44:42 -- setup/common.sh@19 -- # local var val 00:04:35.536 04:44:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.536 04:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.536 04:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.536 04:44:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.536 04:44:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.536 04:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.536 04:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598128 kB' 'MemAvailable: 10574232 kB' 'Buffers: 3456 kB' 'Cached: 2187136 kB' 'SwapCached: 0 kB' 'Active: 857804 kB' 'Inactive: 1450036 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118828 kB' 'Mapped: 51572 kB' 'Shmem: 10472 kB' 'KReclaimable: 65848 kB' 'Slab: 138988 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73140 kB' 'KernelStack: 6460 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # continue 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # continue 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # continue 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.536 04:44:42 -- setup/common.sh@32 -- # continue 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.536 04:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.537 04:44:42 -- setup/common.sh@32 -- # [[ Cached == 
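The loop condensed above and below is the workhorse of get_meminfo in test/setup/common.sh: split each meminfo line on ': ' and return the value of the first key matching the requested one. A minimal, self-contained sketch of the idiom follows; the function name and the direct read from /proc/meminfo are illustrative assumptions, not the real SPDK helper, which first snapshots the file into an array with mapfile (visible in the trace above) so it can also serve per-node sysfs files.

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Quoting "$get" forces a literal (non-glob) match; bash xtrace
            # prints such a pattern with every character backslash-escaped,
            # which is why the log shows \H\u\g\e\P\a\g\e\s\_\S\u\r\p.
            [[ $var == "$get" ]] || continue
            echo "$val"    # e.g. "0" for HugePages_Surp on this host
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Surp    # -> 0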
00:04:35.537 [xtrace condensed: setup/common.sh@31-32 loop; every /proc/meminfo key (MemTotal through HugePages_Rsvd) fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p pattern and hits 'continue']
00:04:35.538 04:44:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.538 04:44:42 -- setup/common.sh@33 -- # echo 0
00:04:35.538 04:44:42 -- setup/common.sh@33 -- # return 0
00:04:35.538 04:44:42 -- setup/hugepages.sh@99 -- # surp=0
00:04:35.538 04:44:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.538 04:44:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.538 04:44:42 -- setup/common.sh@18 -- # local node=
00:04:35.538 04:44:42 -- setup/common.sh@19 -- # local var val
00:04:35.538 04:44:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.538 04:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.538 04:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.538 04:44:42 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.538 04:44:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.538 04:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.538 04:44:42 -- setup/common.sh@31 -- # IFS=': '
00:04:35.538 04:44:42 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot #2: identical to the one above except Cached: 2187140 kB, Active: 857488 kB, Active(anon): 127404 kB, AnonPages: 118836 kB, Mapped: 51892 kB, Slab: 138976 kB, SUnreclaim: 73128 kB, KernelStack: 6476 kB, PageTables: 4240 kB, Committed_AS: 347536 kB]
00:04:35.538 04:44:42 -- setup/common.sh@31 -- # read -r var val _
00:04:35.538 [xtrace condensed: setup/common.sh@31-32 loop; every key (MemTotal through HugePages_Free) fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d pattern and hits 'continue']
00:04:35.539 04:44:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:35.539 04:44:42 -- setup/common.sh@33 -- # echo 0
00:04:35.539 04:44:42 -- setup/common.sh@33 -- # return 0
00:04:35.539 04:44:42 -- setup/hugepages.sh@100 -- # resv=0
00:04:35.539 04:44:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:35.539 nr_hugepages=512
00:04:35.539 04:44:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.539 resv_hugepages=0
00:04:35.539 04:44:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.539 surplus_hugepages=0
00:04:35.539 04:44:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.539 anon_hugepages=0
00:04:35.539 04:44:42 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:35.539 04:44:42 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:35.539 04:44:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.539 04:44:42 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:35.539 04:44:42 -- setup/common.sh@18 -- # local node=
00:04:35.539 04:44:42 -- setup/common.sh@19 -- # local var val
00:04:35.539 04:44:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.539 04:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.539 04:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.539 04:44:42 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.539 04:44:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.539 04:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.539 04:44:42 -- setup/common.sh@31 -- # IFS=': '
00:04:35.539 04:44:42 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot #3: same as snapshot #2 except Active: 857444 kB, Active(anon): 127360 kB, AnonPages: 118584 kB, Mapped: 51692 kB, PageTables: 4236 kB, Committed_AS: 344620 kB, VmallocUsed: 54772 kB]
00:04:35.539 04:44:42 -- setup/common.sh@31 -- # read -r var val _
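The arithmetic checks a few records up, (( 512 == nr_hugepages + surp + resv )) and (( 512 == nr_hugepages )), assert the hugepage pool accounting: the configured total must equal the requested pages plus surplus plus reserved, and with surp=0 and resv=0 the pool is exactly the 512 pages asked for. A runnable restatement using the values from this log (the surrounding test harness is not reproduced):

    nr_hugepages=512 surp=0 resv=0 anon=0
    (( 512 == nr_hugepages + surp + resv )) && echo 'pool accounting consistent'
    (( 512 == nr_hugepages )) && echo 'no surplus/reserved pages in play'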
00:04:35.540 [xtrace condensed: setup/common.sh@31-32 loop; every key (MemTotal through Unaccepted) fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l pattern and hits 'continue']
00:04:35.541 04:44:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:35.541 04:44:42 -- setup/common.sh@33 -- # echo 512
00:04:35.541 04:44:42 -- setup/common.sh@33 -- # return 0
00:04:35.541 04:44:42 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:35.541 04:44:42 -- setup/hugepages.sh@112 -- # get_nodes
00:04:35.541 04:44:42 -- setup/hugepages.sh@27 -- # local node
00:04:35.541 04:44:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:35.541 04:44:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:35.541 04:44:42 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:35.541 04:44:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:35.541 04:44:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:35.541 04:44:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:35.541 04:44:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:35.541 04:44:42 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.541 04:44:42 -- setup/common.sh@18 -- # local node=0
00:04:35.541 04:44:42 -- setup/common.sh@19 -- # local var val
00:04:35.541 04:44:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.541 04:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.541 04:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:35.541 04:44:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:35.541 04:44:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.541 04:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.541 04:44:42 -- setup/common.sh@31 -- # IFS=': '
00:04:35.541 04:44:42 -- setup/common.sh@31 -- # read -r var val _
00:04:35.541 04:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598128 kB' 'MemUsed: 3643844 kB' 'SwapCached: 0 kB' 'Active: 857496 kB' 'Inactive: 1450036 kB' 'Active(anon): 127412 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2190596 kB' 'Mapped: 51524 kB' 'AnonPages: 118612 kB' 'Shmem: 10472 kB' 'KernelStack: 6448 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65848 kB' 'Slab: 138992 kB' 'SReclaimable: 65848 kB' 'SUnreclaim: 73144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
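Note the branch just above: because get_meminfo was called with node 0, the existence test at common.sh@23 now succeeds and @24 swaps mem_f from /proc/meminfo to the per-node sysfs file, whose lines carry a 'Node 0 ' prefix that the extglob expansion at @29 strips. A sketch of that selection logic under the same assumptions as before (illustrative, not the SPDK source):

    shopt -s extglob                      # needed for the +([0-9]) pattern
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # 'Node 0 HugePages_Surp: 0' -> 'HugePages_Surp: 0'
    printf '%s\n' "${mem[@]}" | grep '^HugePages_'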
00:04:35.541 [xtrace condensed: setup/common.sh@31-32 loop; every node0 key (MemTotal through HugePages_Free) fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p pattern and hits 'continue']
00:04:35.542 04:44:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.542 04:44:42 -- setup/common.sh@33 -- # echo 0
00:04:35.542 04:44:42 -- setup/common.sh@33 -- # return 0
00:04:35.542 04:44:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.542 04:44:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.542 04:44:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.542 04:44:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.542 04:44:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:35.542 node0=512 expecting 512
00:04:35.542 04:44:42 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:35.542 
00:04:35.542 real 0m0.681s
00:04:35.542 user 0m0.310s
00:04:35.542 sys 0m0.416s
00:04:35.542 04:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:35.542 04:44:42 -- common/autotest_common.sh@10 -- # set +x
00:04:35.542 ************************************
00:04:35.542 END TEST custom_alloc
00:04:35.542 ************************************
00:04:35.542 04:44:42 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:35.542 04:44:42 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:35.542 04:44:42 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:35.542 04:44:42 -- common/autotest_common.sh@10 -- # set +x
00:04:35.542 ************************************
00:04:35.542 START TEST no_shrink_alloc
00:04:35.542 ************************************
00:04:35.542 04:44:42 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:35.542 04:44:42 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:35.542 04:44:42 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:35.542 04:44:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:35.542 04:44:42 -- setup/hugepages.sh@51 -- # shift
00:04:35.542 04:44:42 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:35.542 04:44:42 -- setup/hugepages.sh@52 -- # local node_ids
00:04:35.542 04:44:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:35.542 04:44:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:35.542 04:44:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:35.542 04:44:42 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:35.542 04:44:42 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:35.542 04:44:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:35.542 04:44:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:35.542 04:44:42 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:35.542 04:44:42 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:35.542 04:44:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:35.542 04:44:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:35.542 04:44:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:35.542 04:44:42 -- setup/hugepages.sh@73 -- # return 0
00:04:35.542 04:44:42 -- setup/hugepages.sh@198 -- # setup output
00:04:35.542 04:44:42 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.542 04:44:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:36.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:36.111 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.111 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.111 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.111 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.111 04:44:43 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:36.111 04:44:43 -- setup/hugepages.sh@89 -- # local node
00:04:36.111 04:44:43 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:36.111 04:44:43 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:36.111 04:44:43 -- setup/hugepages.sh@92 -- # local surp
00:04:36.111 04:44:43 -- setup/hugepages.sh@93 -- # local resv
00:04:36.111 04:44:43 -- setup/hugepages.sh@94 -- # local anon
00:04:36.111 04:44:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:36.111 04:44:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:36.111 04:44:43 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:36.111 04:44:43 -- setup/common.sh@18 -- # local node=
00:04:36.111 04:44:43 -- setup/common.sh@19 -- # local var val
00:04:36.111 04:44:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.111 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.111 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.111 04:44:43 --
setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.111 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.111 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.111 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.111 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.111 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556748 kB' 'MemAvailable: 9532852 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 854924 kB' 'Inactive: 1450040 kB' 'Active(anon): 124840 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115924 kB' 'Mapped: 50912 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138960 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73120 kB' 'KernelStack: 6448 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:36.111 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.111 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- 
setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.112 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.112 04:44:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.112 04:44:43 -- setup/common.sh@33 -- # echo 0 00:04:36.112 04:44:43 -- setup/common.sh@33 -- # return 0 00:04:36.112 04:44:43 -- setup/hugepages.sh@97 -- # anon=0 00:04:36.112 04:44:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.112 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.112 04:44:43 -- setup/common.sh@18 -- # local node= 00:04:36.112 04:44:43 -- setup/common.sh@19 -- # local var val 00:04:36.112 04:44:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.112 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.112 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.112 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.113 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.113 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556748 kB' 'MemAvailable: 9532852 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 854956 kB' 'Inactive: 1450040 kB' 'Active(anon): 124872 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 
'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115984 kB' 'Mapped: 50784 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138936 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73096 kB' 'KernelStack: 6384 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 
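For orientation while the HugePages_Surp scan above grinds through every field: earlier in this test, `get_test_nr_hugepages 2097152 0` turned the requested 2097152 kB into nr_hugepages=1024 and pinned all of it to node 0. A hedged sketch of that arithmetic, assuming the 2048 kB default hugepage size the meminfo dumps report (`Hugepagesize: 2048 kB`); the function name and output line are illustrative only:

    #!/usr/bin/env bash
    # Sketch of the size-to-pages computation traced earlier: divide the
    # requested size in kB by the default hugepage size in kB, then pin
    # the full count to each requested NUMA node.
    get_test_nr_hugepages_sketch() {
        local size_kb=$1; shift
        local -a node_ids=("$@")
        local default_hugepages_kb
        default_hugepages_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
        local nr_hugepages=$(( size_kb / default_hugepages_kb ))
        local -A nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages    # mirrors nodes_test[_no_nodes]=1024
        done
        echo "nr_hugepages=$nr_hugepages on node(s): ${node_ids[*]}"
    }

    get_test_nr_hugepages_sketch 2097152 0   # 2097152 kB / 2048 kB = 1024 pages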
00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 
04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.113 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.113 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.114 04:44:43 -- setup/common.sh@33 -- # echo 0 00:04:36.114 04:44:43 -- setup/common.sh@33 -- # return 0 00:04:36.114 04:44:43 -- setup/hugepages.sh@99 -- # surp=0 00:04:36.114 04:44:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.114 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.114 04:44:43 -- setup/common.sh@18 -- # local node= 00:04:36.114 04:44:43 -- setup/common.sh@19 -- # local var val 00:04:36.114 04:44:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.114 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.114 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.114 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.114 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.114 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556748 kB' 'MemAvailable: 9532852 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 854876 kB' 'Inactive: 1450040 kB' 'Active(anon): 124792 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115932 kB' 'Mapped: 50784 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138924 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73084 kB' 'KernelStack: 6400 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:36.114 04:44:43 -- setup/common.sh@31 
-- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 
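A reading note on the trace format itself: runs like `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` are not corruption. Under `set -x`, bash prints the quoted right-hand side of a `[[ ... == ... ]]` comparison with every character backslash-escaped, to show it is being matched as a literal string rather than as a glob. The effect is easy to reproduce:

    # xtrace renders a quoted [[ == ]] pattern character by character,
    # which is exactly the escaping seen throughout this log.
    set -x
    var=HugePages_Rsvd
    [[ $var == "HugePages_Rsvd" ]] && echo matched
    set +x
    # stderr shows something like:
    # + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    # + echo matched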
00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.114 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.114 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 
-- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
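That closes the third full meminfo scan in a row: verify_nr_hugepages calls get_meminfo once each for AnonHugePages, HugePages_Surp, and HugePages_Rsvd, re-reading the file every time. Purely as an illustration of what those three lookups amount to, a one-pass variant (not what the traced script does):

    #!/usr/bin/env bash
    # One-pass illustration: load every "Field: value" pair from
    # /proc/meminfo into an associative array, then pull the three
    # counters the trace fetches one scan at a time.
    declare -A meminfo
    while IFS=': ' read -r var val _; do
        meminfo[$var]=$val
    done < /proc/meminfo

    anon=${meminfo[AnonHugePages]}
    surp=${meminfo[HugePages_Surp]}
    resv=${meminfo[HugePages_Rsvd]}
    echo "anon=$anon surp=$surp resv=$resv"   # all three are 0 in this run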
00:04:36.115 04:44:43 -- setup/common.sh@33 -- # echo 0 00:04:36.115 04:44:43 -- setup/common.sh@33 -- # return 0 00:04:36.115 04:44:43 -- setup/hugepages.sh@100 -- # resv=0 00:04:36.115 04:44:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:36.115 nr_hugepages=1024 00:04:36.115 resv_hugepages=0 00:04:36.115 04:44:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.115 surplus_hugepages=0 00:04:36.115 04:44:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.115 anon_hugepages=0 00:04:36.115 04:44:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.115 04:44:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.115 04:44:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:36.115 04:44:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.115 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.115 04:44:43 -- setup/common.sh@18 -- # local node= 00:04:36.115 04:44:43 -- setup/common.sh@19 -- # local var val 00:04:36.115 04:44:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.115 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.115 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.115 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.115 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.115 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556748 kB' 'MemAvailable: 9532852 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 854668 kB' 'Inactive: 1450040 kB' 'Active(anon): 124584 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115688 kB' 'Mapped: 50784 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138908 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73068 kB' 'KernelStack: 6384 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.115 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.116 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.116 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.116 04:44:43 -- setup/common.sh@32 -- # [[ 
00:04:36.115 04:44:43 -- setup/common.sh@32 -- # continue  [per-key matching loop elided: MemTotal through Unaccepted, no match]
00:04:36.377 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.377 04:44:43 -- setup/common.sh@33 -- # echo 1024
00:04:36.377 04:44:43 -- setup/common.sh@33 -- # return 0
00:04:36.377 04:44:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.377 04:44:43 -- setup/hugepages.sh@112 -- # get_nodes
00:04:36.377 04:44:43 -- setup/hugepages.sh@27 -- # local node
00:04:36.377 04:44:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.377 04:44:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:36.377 04:44:43 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:36.377 04:44:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:36.377 04:44:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:36.377 04:44:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:36.377 04:44:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:36.377 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.378 04:44:43 -- setup/common.sh@18 -- # local node=0
00:04:36.378 04:44:43 -- setup/common.sh@19 -- # local var val
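get_nodes sizes up the NUMA topology by globbing the node directories under sysfs; this VM has a single node, hence no_nodes=1 and nodes_sys[0]=1024. A sketch of that walk, assuming the standard sysfs hugepage layout (not the hugepages.sh source itself):

    # One nodes_sys[] entry per /sys/devices/system/node/nodeN directory.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"  # prints no_nodes=1 on this machine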
00:04:36.378 04:44:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.378 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.378 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:36.378 04:44:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:36.378 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.378 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.378 04:44:43 -- setup/common.sh@31 -- # IFS=': '
00:04:36.378 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556748 kB' 'MemUsed: 4685224 kB' 'SwapCached: 0 kB' 'Active: 854876 kB' 'Inactive: 1450040 kB' 'Active(anon): 124792 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2190596 kB' 'Mapped: 50784 kB' 'AnonPages: 115896 kB' 'Shmem: 10472 kB' 'KernelStack: 6368 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65840 kB' 'Slab: 138904 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:36.378 04:44:43 -- setup/common.sh@31 -- # read -r var val _  [per-key matching loop elided: MemTotal through HugePages_Free, no match]
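The per-node variant points the same parser at node0's meminfo, whose lines carry a "Node 0 " prefix; common.sh@29 strips it with an extglob substitution before the key matching starts. That step in isolation:

    # Per-node lines look like "Node 0 MemFree: 7556748 kB".
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 MemFree: ..." -> "MemFree: ..."
    printf '%s\n' "${mem[@]:0:3}"     # first few normalized lines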
00:04:36.379 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.379 04:44:43 -- setup/common.sh@33 -- # echo 0
00:04:36.379 04:44:43 -- setup/common.sh@33 -- # return 0
00:04:36.379 04:44:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:36.379 04:44:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:36.379 04:44:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:36.379 04:44:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:36.379 04:44:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:36.379 node0=1024 expecting 1024
00:04:36.379 04:44:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:36.379 04:44:43 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:36.379 04:44:43 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:36.379 04:44:43 -- setup/hugepages.sh@202 -- # setup output
00:04:36.379 04:44:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.379 04:44:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:36.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:36.638 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.638 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.638 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.638 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:36.900 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:36.900 04:44:43 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:36.900 04:44:43 -- setup/hugepages.sh@89 -- # local node
00:04:36.900 04:44:43 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:36.900 04:44:43 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:36.900 04:44:43 -- setup/hugepages.sh@92 -- # local surp
00:04:36.900 04:44:43 -- setup/hugepages.sh@93 -- # local resv
00:04:36.900 04:44:43 -- setup/hugepages.sh@94 -- # local anon
00:04:36.900 04:44:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:36.900 04:44:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:36.900 04:44:43 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:36.900 04:44:43 -- setup/common.sh@18 -- # local node=
00:04:36.900 04:44:43 -- setup/common.sh@19 -- # local var val
00:04:36.900 04:44:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.900 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
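verify_nr_hugepages then re-derives anon, surp, and resv from meminfo and checks that the configured pool adds up, globally and per node. The arithmetic it enforces reduces to roughly this (a sketch with this run's numbers plugged in; variable names follow the trace):

    nr_hugepages=1024 surp=0 resv=0
    total=1024  # HugePages_Total as echoed by the lookup above
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool adds up"
    declare -A nodes_test=([0]=$(( 1024 + resv )))  # per-node expectation (hugepages.sh@116)
    echo "node0=${nodes_test[0]} expecting 1024"    # matches the line printed above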
00:04:36.900 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.900 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.900 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.900 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.900 04:44:43 -- setup/common.sh@31 -- # IFS=': '
00:04:36.900 04:44:43 -- setup/common.sh@31 -- # read -r var val _
00:04:36.900 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556244 kB' 'MemAvailable: 9532348 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 855876 kB' 'Inactive: 1450040 kB' 'Active(anon): 125792 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116980 kB' 'Mapped: 50964 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138868 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73028 kB' 'KernelStack: 6540 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:36.900 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]  [per-key matching loop elided: MemTotal through HardwareCorrupted, no match]
00:04:36.901 04:44:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:36.901 04:44:43 -- setup/common.sh@33 -- # echo 0
00:04:36.901 04:44:43 -- setup/common.sh@33 -- # return 0
00:04:36.901 04:44:43 -- setup/hugepages.sh@97 -- # anon=0
00:04:36.901 04:44:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:36.901 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.901 04:44:43 -- setup/common.sh@18 -- # local node=
00:04:36.901 04:44:43 -- setup/common.sh@19 -- # local var val
00:04:36.901 04:44:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.902 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.902 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.902 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.902 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.902 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.902 04:44:43 -- setup/common.sh@31 -- # IFS=': '
00:04:36.902 04:44:43 -- setup/common.sh@31 -- # read -r var val _
00:04:36.902 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556244 kB' 'MemAvailable: 9532348 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 855064 kB' 'Inactive: 1450040 kB' 'Active(anon): 124980 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115820 kB' 'Mapped: 50788 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138868 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73028 kB' 'KernelStack: 6364 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:36.902 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]  [per-key matching loop elided: MemTotal through HugePages_Rsvd, no match]
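Each of these long blocks resolves to one scalar; outside the harness the same counters can be spot-checked with a one-liner (an equivalent technique, not what the script itself runs):

    awk '$1 == "AnonHugePages:" || $1 == "HugePages_Surp:" {print $1, $2}' /proc/meminfo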
00:04:36.903 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.903 04:44:43 -- setup/common.sh@33 -- # echo 0
00:04:36.903 04:44:43 -- setup/common.sh@33 -- # return 0
00:04:36.903 04:44:43 -- setup/hugepages.sh@99 -- # surp=0
00:04:36.903 04:44:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:36.903 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.903 04:44:43 -- setup/common.sh@18 -- # local node=
00:04:36.903 04:44:43 -- setup/common.sh@19 -- # local var val
00:04:36.903 04:44:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.903 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.903 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.903 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.903 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.903 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.903 04:44:43 -- setup/common.sh@31 -- # IFS=': '
00:04:36.903 04:44:43 -- setup/common.sh@31 -- # read -r var val _
00:04:36.903 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556244 kB' 'MemAvailable: 9532348 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 854844 kB' 'Inactive: 1450040 kB' 'Active(anon): 124760 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115636 kB' 'Mapped: 50668 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138876 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73036 kB' 'KernelStack: 6380 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]  [per-key matching loop continues: MemFree through Slab elided]
00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.904 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.905 04:44:43 -- setup/common.sh@33 -- # echo 0 00:04:36.905 04:44:43 -- setup/common.sh@33 -- # return 0 00:04:36.905 nr_hugepages=1024 00:04:36.905 resv_hugepages=0 00:04:36.905 surplus_hugepages=0 00:04:36.905 anon_hugepages=0 00:04:36.905 04:44:43 -- setup/hugepages.sh@100 -- # resv=0 00:04:36.905 04:44:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:36.905 04:44:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.905 04:44:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.905 04:44:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.905 04:44:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.905 04:44:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:36.905 04:44:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.905 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.905 04:44:43 -- setup/common.sh@18 -- # local node= 00:04:36.905 04:44:43 -- setup/common.sh@19 -- # local var val 00:04:36.905 04:44:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.905 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.905 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.905 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.905 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.905 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556244 kB' 'MemAvailable: 9532348 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 854832 kB' 'Inactive: 1450040 kB' 'Active(anon): 124748 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115884 kB' 'Mapped: 50668 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138876 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73036 kB' 'KernelStack: 6380 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.905 04:44:43 -- setup/common.sh@32 -- # continue 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 04:44:43 -- setup/common.sh@31 -- # read -r var val _ 
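The trace above is one helper doing all the work: get_meminfo reads /proc/meminfo (or a node-local copy under /sys/devices/system/node) into an array, strips any "Node <N> " prefix, then splits each line on IFS=': ' until the requested key matches and echoes its value. A minimal sketch of that pattern, assuming bash 4+ for mapfile; the function name is illustrative, not the repo's own helper:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch: look up one field in /proc/meminfo, or in a node-local
    # meminfo file when a node number is given as the second argument.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        local -a mem
        local line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # node-local files prefix every line with "Node <N> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # a kB figure, or a bare count for HugePages_*
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Rsvd it scans the global file, exactly like the lookups traced here; a node argument switches it to the node-local file.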
00:04:36.905 04:44:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:36.905 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:36.905 04:44:43 -- setup/common.sh@18 -- # local node=
00:04:36.905 04:44:43 -- setup/common.sh@19 -- # local var val
00:04:36.905 04:44:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.905 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.905 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.905 04:44:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.905 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.905 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.905 04:44:43 -- setup/common.sh@31 -- # IFS=': '
00:04:36.905 04:44:43 -- setup/common.sh@31 -- # read -r var val _
00:04:36.905 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556244 kB' 'MemAvailable: 9532348 kB' 'Buffers: 3456 kB' 'Cached: 2187140 kB' 'SwapCached: 0 kB' 'Active: 854832 kB' 'Inactive: 1450040 kB' 'Active(anon): 124748 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115884 kB' 'Mapped: 50668 kB' 'Shmem: 10472 kB' 'KReclaimable: 65840 kB' 'Slab: 138876 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73036 kB' 'KernelStack: 6380 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
    [xtrace elided: every key from MemTotal through HugePages_Free is compared against HugePages_Total and skipped with continue]
00:04:36.906 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.906 04:44:43 -- setup/common.sh@33 -- # echo 1024
00:04:36.906 04:44:43 -- setup/common.sh@33 -- # return 0
00:04:36.906 04:44:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.906 04:44:43 -- setup/hugepages.sh@112 -- # get_nodes
00:04:36.906 04:44:43 -- setup/hugepages.sh@27 -- # local node
00:04:36.906 04:44:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.906 04:44:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:36.906 04:44:43 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:36.906 04:44:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:36.906 04:44:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:36.906 04:44:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
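The get_nodes bookkeeping above seeds nodes_sys/nodes_test with the expected 1024 pages for the lone node before each node's surplus is read back. The same cross-check can be made directly against sysfs; a hedged sketch (the loop and variable names are mine, the paths are the kernel's standard per-node counters for the 2048 kB page size):

    #!/usr/bin/env bash
    shopt -s nullglob
    total=0
    for node in /sys/devices/system/node/node[0-9]*; do
        f=$node/hugepages/hugepages-2048kB/nr_hugepages
        # per-node pool size, the same figure the node-local meminfo reports
        [[ -e $f ]] && (( total += $(<"$f") ))
    done
    global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == global )) && echo "per-node counts add up to $global"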
00:04:36.906 04:44:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:36.906 04:44:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.906 04:44:43 -- setup/common.sh@18 -- # local node=0
00:04:36.906 04:44:43 -- setup/common.sh@19 -- # local var val
00:04:36.906 04:44:43 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.906 04:44:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.906 04:44:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:36.906 04:44:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:36.906 04:44:43 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.906 04:44:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.906 04:44:43 -- setup/common.sh@31 -- # IFS=': '
00:04:36.906 04:44:43 -- setup/common.sh@31 -- # read -r var val _
00:04:36.907 04:44:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555992 kB' 'MemUsed: 4685980 kB' 'SwapCached: 0 kB' 'Active: 854768 kB' 'Inactive: 1450040 kB' 'Active(anon): 124684 kB' 'Inactive(anon): 0 kB' 'Active(file): 730084 kB' 'Inactive(file): 1450040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2190596 kB' 'Mapped: 50784 kB' 'AnonPages: 115888 kB' 'Shmem: 10472 kB' 'KernelStack: 6400 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65840 kB' 'Slab: 138876 kB' 'SReclaimable: 65840 kB' 'SUnreclaim: 73036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
    [xtrace elided: every node0 field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped with continue]
00:04:36.907 04:44:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.907 04:44:43 -- setup/common.sh@33 -- # echo 0
00:04:36.907 04:44:43 -- setup/common.sh@33 -- # return 0
00:04:36.907 04:44:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:36.907 04:44:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:36.907 04:44:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:36.907 04:44:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:36.907 node0=1024 expecting 1024
00:04:36.907 04:44:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:36.907 04:44:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:36.907 
00:04:36.907 real 0m1.344s
00:04:36.907 user 0m0.636s
00:04:36.907 sys 0m0.787s
00:04:36.907 04:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.907 04:44:43 -- common/autotest_common.sh@10 -- # set +x
00:04:36.907 ************************************
00:04:36.907 END TEST no_shrink_alloc
00:04:36.908 ************************************
00:04:36.908 04:44:43 -- setup/hugepages.sh@217 -- # clear_hp
00:04:36.908 04:44:43 -- setup/hugepages.sh@37 -- # local node hp
00:04:36.908 04:44:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:36.908 04:44:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.908 04:44:43 -- setup/hugepages.sh@41 -- # echo 0
00:04:36.908 04:44:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.908 04:44:43 -- setup/hugepages.sh@41 -- # echo 0
00:04:36.908 04:44:43 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:36.908 04:44:43 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:36.908 
00:04:36.908 real 0m6.017s
00:04:36.908 user 0m2.761s
00:04:36.908 sys 0m3.479s
00:04:36.908 04:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.908 ************************************
00:04:36.908 END TEST hugepages
00:04:36.908 ************************************
00:04:36.908 04:44:43 -- common/autotest_common.sh@10 -- # set +x
00:04:37.166 04:44:44 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:37.166 04:44:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:37.166 04:44:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:37.166 04:44:44 -- common/autotest_common.sh@10 -- # set +x
00:04:37.166 ************************************
00:04:37.166 START TEST driver
00:04:37.167 ************************************
00:04:37.167 04:44:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:37.167 * Looking for test storage...
00:04:37.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:37.167 04:44:44 -- setup/driver.sh@68 -- # setup reset
00:04:37.167 04:44:44 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:37.167 04:44:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:43.734 04:44:50 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:43.734 04:44:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.734 04:44:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.734 04:44:50 -- common/autotest_common.sh@10 -- # set +x
00:04:43.734 ************************************
00:04:43.734 START TEST guess_driver
00:04:43.734 ************************************
00:04:43.734 04:44:50 -- common/autotest_common.sh@1104 -- # guess_driver
00:04:43.734 04:44:50 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:43.734 04:44:50 -- setup/driver.sh@47 -- # local fail=0
00:04:43.734 04:44:50 -- setup/driver.sh@49 -- # pick_driver
00:04:43.734 04:44:50 -- setup/driver.sh@36 -- # vfio
00:04:43.734 04:44:50 -- setup/driver.sh@21 -- # local iommu_groups
00:04:43.734 04:44:50 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:43.734 04:44:50 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:43.734 04:44:50 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:43.734 04:44:50 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:43.734 04:44:50 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:43.734 04:44:50 -- setup/driver.sh@32 -- # return 1
00:04:43.734 04:44:50 -- setup/driver.sh@38 -- # uio
00:04:43.734 04:44:50 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:43.734 04:44:50 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:43.734 04:44:50 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:43.734 04:44:50 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:43.734 04:44:50 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:04:43.734 04:44:50 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:43.734 04:44:50 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:43.734 04:44:50 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:43.734 Looking for driver=uio_pci_generic
00:04:43.734 04:44:50 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
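pick_driver, traced above, tries vfio first: it only qualifies when the host exposes IOMMU groups or unsafe no-IOMMU mode is enabled, and since neither held here it fell back to uio, accepting uio_pci_generic because modprobe --show-depends resolved the module chain. A sketch of that branch; the function name is illustrative and the output strings simply mirror the trace:

    #!/usr/bin/env bash
    shopt -s nullglob
    pick_driver_sketch() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        if (( ${#groups[@]} > 0 )) || [[ -e $unsafe && $(<"$unsafe") == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            # --show-depends prints the insmod chain without loading anything
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }
    pick_driver_sketch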
00:04:43.734 04:44:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:43.734 04:44:50 -- setup/driver.sh@45 -- # setup output config
00:04:43.734 04:44:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.734 04:44:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:43.993 lsblk: /dev/nvme0c0n1: not a block device
00:04:43.993 04:44:51 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:43.993 04:44:51 -- setup/driver.sh@58 -- # continue
00:04:43.993 04:44:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.252 04:44:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:44.252 04:44:51 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:44.252 04:44:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.252 04:44:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:44.252 04:44:51 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:44.252 04:44:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.252 04:44:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:44.252 04:44:51 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:44.252 04:44:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.252 04:44:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:44.252 04:44:51 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:44.252 04:44:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.510 04:44:51 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:44.510 04:44:51 -- setup/driver.sh@65 -- # setup reset
00:04:44.510 04:44:51 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:44.510 04:44:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:51.067 
00:04:51.067 real 0m7.271s
00:04:51.067 user 0m0.877s
00:04:51.067 sys 0m1.541s
00:04:51.067 04:44:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.067 ************************************
00:04:51.067 END TEST guess_driver
00:04:51.067 ************************************
00:04:51.067 04:44:57 -- common/autotest_common.sh@10 -- # set +x
00:04:51.067 ************************************
00:04:51.067 END TEST driver
00:04:51.067 ************************************
00:04:51.067 
00:04:51.067 real 0m13.297s
00:04:51.067 user 0m1.278s
00:04:51.067 sys 0m2.334s
00:04:51.067 04:44:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.067 04:44:57 -- common/autotest_common.sh@10 -- # set +x
00:04:51.067 04:44:57 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:51.067 04:44:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:51.067 04:44:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:51.067 04:44:57 -- common/autotest_common.sh@10 -- # set +x
00:04:51.067 ************************************
00:04:51.067 START TEST devices
00:04:51.067 ************************************
00:04:51.067 04:44:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:51.067 * Looking for test storage...
00:04:51.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:51.067 04:44:57 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:51.067 04:44:57 -- setup/devices.sh@192 -- # setup reset
00:04:51.067 04:44:57 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:51.067 04:44:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:51.632 04:44:58 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:51.632 04:44:58 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:51.632 04:44:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:51.632 04:44:58 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:51.632 04:44:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:04:51.632 04:44:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1
00:04:51.632 04:44:58 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1
00:04:51.632 04:44:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]]
00:04:51.632 04:44:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
    [xtrace elided: the same is_block_zoned probe repeats for nvme0n1, nvme1n1, nvme1n2, nvme1n3, nvme2n1 and nvme3n1; none of them is zoned]
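get_zoned_devs above marks a namespace as zoned whenever /sys/block/<dev>/queue/zoned reports anything other than "none"; here every namespace came back conventional. The same filter as a stand-alone sketch (the array name is mine):

    #!/usr/bin/env bash
    shopt -s nullglob
    declare -A zoned_devs
    for queue in /sys/block/nvme*/queue/zoned; do
        dev=${queue%/queue/zoned}
        dev=${dev##*/}
        # "none" means a conventional (non-zoned) block device
        [[ $(<"$queue") != none ]] && zoned_devs[$dev]=1
    done
    if (( ${#zoned_devs[@]} )); then
        echo "zoned namespaces: ${!zoned_devs[*]}"
    else
        echo "zoned namespaces: none"
    fi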
00:04:51.632 04:44:58 -- setup/devices.sh@196 -- # blocks=()
00:04:51.632 04:44:58 -- setup/devices.sh@196 -- # declare -a blocks
00:04:51.632 04:44:58 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:51.632 04:44:58 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:51.632 04:44:58 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:51.632 04:44:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:51.632 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:51.632 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:51.632 04:44:58 -- setup/devices.sh@202 -- # pci=0000:00:09.0
00:04:51.632 04:44:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]]
00:04:51.632 04:44:58 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:51.632 04:44:58 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:04:51.632 04:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:04:51.632 No valid GPT data, bailing
00:04:51.632 04:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:51.632 04:44:58 -- scripts/common.sh@393 -- # pt=
00:04:51.632 04:44:58 -- scripts/common.sh@394 -- # return 1
00:04:51.632 04:44:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:51.632 04:44:58 -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:51.632 04:44:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:51.632 04:44:58 -- setup/common.sh@80 -- # echo 1073741824
00:04:51.632 04:44:58 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size ))
00:04:51.632 04:44:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:51.632 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:04:51.632 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme1
00:04:51.632 04:44:58 -- setup/devices.sh@202 -- # pci=0000:00:08.0
00:04:51.632 04:44:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]]
00:04:51.632 04:44:58 -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:04:51.632 04:44:58 -- scripts/common.sh@380 -- # local block=nvme1n1 pt
00:04:51.632 04:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:04:51.890 No valid GPT data, bailing
00:04:51.890 04:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:51.890 04:44:58 -- scripts/common.sh@393 -- # pt=
00:04:51.890 04:44:58 -- scripts/common.sh@394 -- # return 1
00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:04:51.890 04:44:58 -- setup/common.sh@76 -- # local dev=nvme1n1
00:04:51.890 04:44:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:04:51.890 04:44:58 -- setup/common.sh@80 -- # echo 4294967296
00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size ))
00:04:51.890 04:44:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:51.890 04:44:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0
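Each namespace above has to clear two gates before it lands in blocks: block_in_use must find no partition table (spdk-gpt.py first, then the blkid -s PTTYPE fallback seen in the trace), and its size must reach min_disk_size (3221225472 bytes, i.e. 3 GiB), which is why the 1 GiB nvme0n1 was skipped while the 4 GiB nvme1n1 was kept. A hedged sketch of both gates; the helper names are mine and blkid output can vary across versions:

    #!/usr/bin/env bash
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

    # gate 1: no partition table means the disk is free for the tests
    disk_is_free() {
        [[ -z $(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) ]]
    }

    # gate 2: /sys/block/<dev>/size counts 512-byte sectors
    dev_bytes() {
        echo $(( $(<"/sys/block/$1/size") * 512 ))
    }

    if disk_is_free nvme1n1 && (( $(dev_bytes nvme1n1) >= min_disk_size )); then
        echo "nvme1n1 qualifies as test storage"
    fi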
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:51.890 04:44:58 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:51.890 04:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:51.890 No valid GPT data, bailing 00:04:51.890 04:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:51.890 04:44:58 -- scripts/common.sh@393 -- # pt= 00:04:51.890 04:44:58 -- scripts/common.sh@394 -- # return 1 00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:51.890 04:44:58 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:51.890 04:44:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:51.890 04:44:58 -- setup/common.sh@80 -- # echo 4294967296 00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.890 04:44:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.890 04:44:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:51.890 04:44:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.890 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:51.890 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.890 04:44:58 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:51.890 04:44:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:51.890 04:44:58 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:51.890 04:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:51.890 No valid GPT data, bailing 00:04:51.890 04:44:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:51.890 04:44:58 -- scripts/common.sh@393 -- # pt= 00:04:51.890 04:44:58 -- scripts/common.sh@394 -- # return 1 00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:51.890 04:44:58 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:51.890 04:44:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:51.890 04:44:58 -- setup/common.sh@80 -- # echo 4294967296 00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.890 04:44:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.890 04:44:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:51.890 04:44:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.890 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:51.890 04:44:58 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:51.890 04:44:58 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:51.890 04:44:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:51.890 04:44:58 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:51.890 04:44:58 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:51.890 04:44:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:52.149 No valid GPT data, bailing 00:04:52.149 04:44:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:52.149 04:44:59 -- scripts/common.sh@393 -- # pt= 00:04:52.149 04:44:59 -- scripts/common.sh@394 -- # return 1 00:04:52.149 04:44:59 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:52.149 04:44:59 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:52.149 04:44:59 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:04:52.149 04:44:59 -- setup/common.sh@80 -- # echo 6343335936 00:04:52.149 04:44:59 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:52.149 04:44:59 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.149 04:44:59 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:52.149 04:44:59 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.149 04:44:59 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:52.149 04:44:59 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:52.149 04:44:59 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:52.149 04:44:59 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:52.149 04:44:59 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:52.149 04:44:59 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:52.149 04:44:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:52.149 No valid GPT data, bailing 00:04:52.149 04:44:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:52.149 04:44:59 -- scripts/common.sh@393 -- # pt= 00:04:52.149 04:44:59 -- scripts/common.sh@394 -- # return 1 00:04:52.149 04:44:59 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:52.149 04:44:59 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:52.149 04:44:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:52.149 04:44:59 -- setup/common.sh@80 -- # echo 5368709120 00:04:52.149 04:44:59 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:52.149 04:44:59 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.149 04:44:59 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:52.149 04:44:59 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:52.149 04:44:59 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:52.149 04:44:59 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.149 04:44:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.149 04:44:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.149 04:44:59 -- common/autotest_common.sh@10 -- # set +x 00:04:52.149 ************************************ 00:04:52.149 START TEST nvme_mount 00:04:52.149 ************************************ 00:04:52.149 04:44:59 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:52.149 04:44:59 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:52.149 04:44:59 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:52.149 04:44:59 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.149 04:44:59 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.149 04:44:59 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:52.149 04:44:59 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:52.149 04:44:59 -- setup/common.sh@40 -- # local part_no=1 00:04:52.149 04:44:59 -- setup/common.sh@41 -- # local size=1073741824 00:04:52.149 04:44:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.149 04:44:59 -- setup/common.sh@44 -- # parts=() 00:04:52.149 04:44:59 -- setup/common.sh@44 -- # local parts 00:04:52.149 04:44:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.149 04:44:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.149 04:44:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.149 04:44:59 -- setup/common.sh@46 -- # (( 
part++ )) 00:04:52.149 04:44:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.149 04:44:59 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:52.149 04:44:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:52.149 04:44:59 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:53.083 Creating new GPT entries in memory. 00:04:53.083 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.083 other utilities. 00:04:53.083 04:45:00 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.083 04:45:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.083 04:45:00 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.083 04:45:00 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.083 04:45:00 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:54.460 Creating new GPT entries in memory. 00:04:54.460 The operation has completed successfully. 00:04:54.460 04:45:01 -- setup/common.sh@57 -- # (( part++ )) 00:04:54.460 04:45:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.460 04:45:01 -- setup/common.sh@62 -- # wait 54210 00:04:54.460 04:45:01 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.460 04:45:01 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:54.460 04:45:01 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.460 04:45:01 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:54.460 04:45:01 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:54.460 04:45:01 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.460 04:45:01 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.460 04:45:01 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:54.460 04:45:01 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:54.460 04:45:01 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.460 04:45:01 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.460 04:45:01 -- setup/devices.sh@53 -- # local found=0 00:04:54.460 04:45:01 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.460 04:45:01 -- setup/devices.sh@56 -- # : 00:04:54.460 04:45:01 -- setup/devices.sh@59 -- # local pci status 00:04:54.460 04:45:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.460 04:45:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:54.460 04:45:01 -- setup/devices.sh@47 -- # setup output config 00:04:54.460 04:45:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.460 04:45:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.460 04:45:01 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.460 04:45:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.460 04:45:01 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.460 04:45:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.028 
04:45:01 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.028 04:45:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:55.028 04:45:01 -- setup/devices.sh@63 -- # found=1 00:04:55.028 04:45:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.028 04:45:01 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.028 04:45:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.028 lsblk: /dev/nvme0c0n1: not a block device 00:04:55.028 04:45:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.028 04:45:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.028 04:45:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.028 04:45:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.288 04:45:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.288 04:45:02 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.288 04:45:02 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.288 04:45:02 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.288 04:45:02 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.288 04:45:02 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:55.288 04:45:02 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.288 04:45:02 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.288 04:45:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:55.288 04:45:02 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:55.288 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.288 04:45:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:55.288 04:45:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:55.547 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.547 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.547 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.547 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:55.547 04:45:02 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:55.547 04:45:02 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:55.547 04:45:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.547 04:45:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:55.547 04:45:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:55.547 04:45:02 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.547 04:45:02 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.547 04:45:02 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:55.547 04:45:02 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:55.547 04:45:02 -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.547 04:45:02 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.547 04:45:02 -- setup/devices.sh@53 -- # local found=0 00:04:55.547 04:45:02 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.547 04:45:02 -- setup/devices.sh@56 -- # : 00:04:55.547 04:45:02 -- setup/devices.sh@59 -- # local pci status 00:04:55.547 04:45:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:55.547 04:45:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.547 04:45:02 -- setup/devices.sh@47 -- # setup output config 00:04:55.547 04:45:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.547 04:45:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.547 04:45:02 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.547 04:45:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.806 04:45:02 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.806 04:45:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.065 04:45:03 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.065 04:45:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:56.065 04:45:03 -- setup/devices.sh@63 -- # found=1 00:04:56.065 04:45:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.065 04:45:03 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.065 04:45:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.065 lsblk: /dev/nvme0c0n1: not a block device 00:04:56.324 04:45:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.324 04:45:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.324 04:45:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.324 04:45:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.324 04:45:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.324 04:45:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:56.324 04:45:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.324 04:45:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.324 04:45:03 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.324 04:45:03 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.324 04:45:03 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:56.324 04:45:03 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:56.324 04:45:03 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:56.324 04:45:03 -- setup/devices.sh@50 -- # local mount_point= 00:04:56.324 04:45:03 -- setup/devices.sh@51 -- # local test_file= 00:04:56.324 04:45:03 -- setup/devices.sh@53 -- # local found=0 00:04:56.324 04:45:03 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.324 04:45:03 -- setup/devices.sh@59 -- # local pci status 00:04:56.324 04:45:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.324 04:45:03 -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:56.324 04:45:03 -- setup/devices.sh@47 -- # setup output config 00:04:56.324 04:45:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.324 04:45:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.583 04:45:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.583 04:45:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.583 04:45:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.583 04:45:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.149 04:45:04 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:57.149 04:45:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:57.149 04:45:04 -- setup/devices.sh@63 -- # found=1 00:04:57.149 04:45:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.149 04:45:04 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:57.149 04:45:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.149 lsblk: /dev/nvme0c0n1: not a block device 00:04:57.149 04:45:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:57.149 04:45:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.408 04:45:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:57.408 04:45:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.408 04:45:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.408 04:45:04 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:57.408 04:45:04 -- setup/devices.sh@68 -- # return 0 00:04:57.408 04:45:04 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:57.408 04:45:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.408 04:45:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:57.408 04:45:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:57.408 04:45:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:57.408 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.408 00:04:57.408 real 0m5.211s 00:04:57.408 user 0m1.280s 00:04:57.408 sys 0m1.641s 00:04:57.408 04:45:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.408 04:45:04 -- common/autotest_common.sh@10 -- # set +x 00:04:57.408 ************************************ 00:04:57.408 END TEST nvme_mount 00:04:57.408 ************************************ 00:04:57.408 04:45:04 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:57.408 04:45:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.408 04:45:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.408 04:45:04 -- common/autotest_common.sh@10 -- # set +x 00:04:57.408 ************************************ 00:04:57.408 START TEST dm_mount 00:04:57.408 ************************************ 00:04:57.408 04:45:04 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:57.408 04:45:04 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:57.408 04:45:04 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:57.408 04:45:04 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:57.408 04:45:04 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:57.408 04:45:04 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:57.408 04:45:04 -- setup/common.sh@40 -- # local part_no=2 00:04:57.408 
04:45:04 -- setup/common.sh@41 -- # local size=1073741824 00:04:57.408 04:45:04 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.408 04:45:04 -- setup/common.sh@44 -- # parts=() 00:04:57.408 04:45:04 -- setup/common.sh@44 -- # local parts 00:04:57.408 04:45:04 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.408 04:45:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.408 04:45:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.408 04:45:04 -- setup/common.sh@46 -- # (( part++ )) 00:04:57.408 04:45:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.408 04:45:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.408 04:45:04 -- setup/common.sh@46 -- # (( part++ )) 00:04:57.408 04:45:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.408 04:45:04 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:57.408 04:45:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:57.408 04:45:04 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:58.343 Creating new GPT entries in memory. 00:04:58.343 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:58.343 other utilities. 00:04:58.343 04:45:05 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:58.343 04:45:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.343 04:45:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.343 04:45:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.606 04:45:05 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:59.542 Creating new GPT entries in memory. 00:04:59.542 The operation has completed successfully. 00:04:59.542 04:45:06 -- setup/common.sh@57 -- # (( part++ )) 00:04:59.542 04:45:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.542 04:45:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.542 04:45:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.542 04:45:06 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:05:00.477 The operation has completed successfully. 
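
The partition bounds in the sgdisk calls above are derived, not hard-coded. A condensed sketch of the arithmetic setup/common.sh performs (device name and sizes copied from the trace; the `(( size /= 4096 ))` step implies the drives expose 4096-byte logical blocks, an assumption here, since sgdisk addresses partitions in logical blocks):

    disk=/dev/nvme1n1
    size=1073741824                              # 1 GiB per partition, from the trace
    blocks=$(( size / 4096 ))                    # 262144 logical blocks, assuming 4 KiB LBAs
    part_start=2048
    part_end=$(( part_start + blocks - 1 ))      # 2048 + 262144 - 1 = 264191
    sgdisk "$disk" --zap-all                     # wipe any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end

This reproduces the traced `--new=1:2048:264191`; the second partition simply continues where the first ends (264192 + 262144 - 1 = 526335, matching `--new=2:264192:526335`). The flock takes the advisory lock that udev honors on block devices while the table is rewritten, and sync_dev_uevents.sh waits for the resulting partition uevents before the test proceeds.
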
00:05:00.477 04:45:07 -- setup/common.sh@57 -- # (( part++ )) 00:05:00.477 04:45:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.477 04:45:07 -- setup/common.sh@62 -- # wait 54928 00:05:00.477 04:45:07 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:00.477 04:45:07 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.477 04:45:07 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.477 04:45:07 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:00.477 04:45:07 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:00.477 04:45:07 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.477 04:45:07 -- setup/devices.sh@161 -- # break 00:05:00.477 04:45:07 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.477 04:45:07 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:00.477 04:45:07 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:00.477 04:45:07 -- setup/devices.sh@166 -- # dm=dm-0 00:05:00.477 04:45:07 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:05:00.477 04:45:07 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:05:00.477 04:45:07 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.477 04:45:07 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:00.477 04:45:07 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.477 04:45:07 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.477 04:45:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:00.477 04:45:07 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.477 04:45:07 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.477 04:45:07 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:00.477 04:45:07 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:05:00.477 04:45:07 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.477 04:45:07 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.477 04:45:07 -- setup/devices.sh@53 -- # local found=0 00:05:00.477 04:45:07 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.477 04:45:07 -- setup/devices.sh@56 -- # : 00:05:00.477 04:45:07 -- setup/devices.sh@59 -- # local pci status 00:05:00.736 04:45:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.736 04:45:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:00.736 04:45:07 -- setup/devices.sh@47 -- # setup output config 00:05:00.736 04:45:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.736 04:45:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.736 04:45:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.736 04:45:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.995 04:45:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.995 04:45:07 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.253 04:45:08 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.253 04:45:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:01.253 04:45:08 -- setup/devices.sh@63 -- # found=1 00:05:01.253 04:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.253 04:45:08 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.253 04:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.253 lsblk: /dev/nvme0c0n1: not a block device 00:05:01.253 04:45:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.253 04:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.512 04:45:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.512 04:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.512 04:45:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.512 04:45:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:01.512 04:45:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.512 04:45:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.512 04:45:08 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:01.512 04:45:08 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.512 04:45:08 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:01.512 04:45:08 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:01.512 04:45:08 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:01.512 04:45:08 -- setup/devices.sh@50 -- # local mount_point= 00:05:01.512 04:45:08 -- setup/devices.sh@51 -- # local test_file= 00:05:01.512 04:45:08 -- setup/devices.sh@53 -- # local found=0 00:05:01.512 04:45:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.512 04:45:08 -- setup/devices.sh@59 -- # local pci status 00:05:01.512 04:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.512 04:45:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:01.512 04:45:08 -- setup/devices.sh@47 -- # setup output config 00:05:01.512 04:45:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.512 04:45:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.512 04:45:08 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.512 04:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.771 04:45:08 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.771 04:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.038 04:45:09 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.038 04:45:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:02.038 04:45:09 -- setup/devices.sh@63 -- # found=1 00:05:02.038 04:45:09 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:02.038 04:45:09 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.038 04:45:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.312 lsblk: /dev/nvme0c0n1: not a block device 00:05:02.312 04:45:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.312 04:45:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.312 04:45:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.312 04:45:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.312 04:45:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.312 04:45:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.312 04:45:09 -- setup/devices.sh@68 -- # return 0 00:05:02.312 04:45:09 -- setup/devices.sh@187 -- # cleanup_dm 00:05:02.312 04:45:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.312 04:45:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.312 04:45:09 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:02.574 04:45:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:02.575 04:45:09 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:02.575 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.575 04:45:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:02.575 04:45:09 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:02.575 00:05:02.575 real 0m5.032s 00:05:02.575 user 0m0.849s 00:05:02.575 sys 0m1.136s 00:05:02.575 04:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.575 ************************************ 00:05:02.575 END TEST dm_mount 00:05:02.575 ************************************ 00:05:02.575 04:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.575 04:45:09 -- setup/devices.sh@1 -- # cleanup 00:05:02.575 04:45:09 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:02.575 04:45:09 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.575 04:45:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:02.575 04:45:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:02.575 04:45:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:02.575 04:45:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:02.834 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.834 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.834 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.834 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:02.834 04:45:09 -- setup/devices.sh@12 -- # cleanup_dm 00:05:02.834 04:45:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.834 04:45:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.834 04:45:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:02.834 04:45:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:02.834 04:45:09 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:02.834 04:45:09 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:02.834 00:05:02.834 real 0m12.424s 00:05:02.834 user 0m3.117s 00:05:02.834 sys 0m3.657s 00:05:02.834 04:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.834 ************************************ 00:05:02.834 04:45:09 -- 
common/autotest_common.sh@10 -- # set +x
00:05:02.834 END TEST devices
00:05:02.834 ************************************
00:05:02.834
00:05:02.834 real 0m43.998s
00:05:02.834 user 0m10.188s
00:05:02.834 sys 0m13.771s
00:05:02.834 04:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:02.834 04:45:09 -- common/autotest_common.sh@10 -- # set +x
00:05:02.834 ************************************
00:05:02.834 END TEST setup.sh
00:05:02.834 ************************************
00:05:02.834 04:45:09 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:03.093 Hugepages
00:05:03.093 node    hugesize    free /  total
00:05:03.093 node0  1048576kB       0 /      0
00:05:03.093 node0     2048kB    2048 /   2048
00:05:03.093
00:05:03.093 Type    BDF           Vendor Device NUMA    Driver      Device  Block devices
00:05:03.093 virtio  0000:00:03.0  1af4   1001   unknown virtio-pci  -       vda
00:05:03.352 NVMe    0000:00:06.0  1b36   0010   unknown nvme        nvme2   nvme2n1
00:05:03.352 NVMe    0000:00:07.0  1b36   0010   unknown nvme        nvme3   nvme3n1
00:05:03.352 NVMe    0000:00:08.0  1b36   0010   unknown nvme        nvme1   nvme1n1 nvme1n2 nvme1n3
00:05:03.610 NVMe    0000:00:09.0  1b36   0010   unknown nvme        nvme0   nvme0c0n1
00:05:03.610 04:45:10 -- spdk/autotest.sh@141 -- # uname -s
00:05:03.610 04:45:10 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]]
00:05:03.610 04:45:10 -- spdk/autotest.sh@143 -- # nvme_namespace_revert
00:05:03.610 04:45:10 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:04.546 lsblk: /dev/nvme0c0n1: not a block device
00:05:04.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:04.546 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:04.805 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:05:04.805 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:05:04.805 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:05:04.805 04:45:11 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:05.746 04:45:12 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:05.746 04:45:12 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:05.746 04:45:12 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs))
00:05:05.746 04:45:12 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs
00:05:05.746 04:45:12 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:05.746 04:45:12 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:05.746 04:45:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:06.003 04:45:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:06.003 04:45:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:06.003 04:45:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:05:06.003 04:45:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:05:06.003 04:45:12 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:06.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:06.569 Waiting for block devices as requested
00:05:06.569 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:05:06.569 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:05:06.827 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:05:06.827 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:05:12.097 * Events for some block/disk devices
(0000:00:09.0) were not caught, they may be missing 00:05:12.097 04:45:18 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.097 04:45:18 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1542 -- # continue 00:05:12.097 04:45:18 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.097 04:45:18 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # 
nvme id-ctrl /dev/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1542 -- # continue 00:05:12.097 04:45:18 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:12.097 04:45:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:12.097 04:45:18 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.097 04:45:18 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1542 -- # continue 00:05:12.097 04:45:18 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:05:12.097 04:45:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.097 04:45:18 -- 
common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.097 04:45:18 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.097 04:45:18 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.097 04:45:18 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.097 04:45:18 -- common/autotest_common.sh@1542 -- # continue 00:05:12.097 04:45:18 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:12.097 04:45:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:12.097 04:45:18 -- common/autotest_common.sh@10 -- # set +x 00:05:12.097 04:45:19 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:12.097 04:45:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:12.097 04:45:19 -- common/autotest_common.sh@10 -- # set +x 00:05:12.097 04:45:19 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.033 lsblk: /dev/nvme0c0n1: not a block device 00:05:13.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.291 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.291 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.291 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.291 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.291 04:45:20 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:13.291 04:45:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:13.291 04:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.291 04:45:20 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:13.291 04:45:20 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:13.291 04:45:20 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.291 04:45:20 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:13.291 04:45:20 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:13.291 04:45:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.291 04:45:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.291 04:45:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.291 04:45:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.291 04:45:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:13.291 04:45:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.550 04:45:20 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:13.550 04:45:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:13.550 04:45:20 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.550 04:45:20 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.550 04:45:20 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # cat 
/sys/bus/pci/devices/0000:00:07.0/device 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.550 04:45:20 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.550 04:45:20 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.550 04:45:20 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.550 04:45:20 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:13.550 04:45:20 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.550 04:45:20 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.550 04:45:20 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:13.550 04:45:20 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:13.550 04:45:20 -- common/autotest_common.sh@1578 -- # return 0 00:05:13.550 04:45:20 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:13.550 04:45:20 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:13.550 04:45:20 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:13.550 04:45:20 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:13.550 04:45:20 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:13.550 04:45:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:13.550 04:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.550 04:45:20 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.550 04:45:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.550 04:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.550 04:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.550 ************************************ 00:05:13.550 START TEST env 00:05:13.550 ************************************ 00:05:13.550 04:45:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.550 * Looking for test storage... 
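
The opal_revert_cleanup pass above filters NVMe controllers by PCI device ID before deciding whether any OPAL revert is needed. A minimal reproduction of that sysfs probe (the BDF list is hard-coded here for illustration; it comes from gen_nvme.sh in the actual run):

    for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID, e.g. 0x0010
        [[ $dev_id == 0x0a54 ]] && echo "$bdf"             # 0x0a54 is the ID the OPAL revert targets
    done

In this run every controller reports 0x0010 (vendor 1b36, QEMU's emulated NVMe), so the list comes back empty and the cleanup returns without touching anything, which is what the final `[[ -z '' ]]` / `return 0` in the trace shows.
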
00:05:13.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:13.550 04:45:20 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.550 04:45:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.550 04:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.550 04:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.550 ************************************ 00:05:13.550 START TEST env_memory 00:05:13.550 ************************************ 00:05:13.550 04:45:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.550 00:05:13.550 00:05:13.550 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.550 http://cunit.sourceforge.net/ 00:05:13.550 00:05:13.550 00:05:13.550 Suite: memory 00:05:13.550 Test: alloc and free memory map ...[2024-05-12 04:45:20.632419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.809 passed 00:05:13.809 Test: mem map translation ...[2024-05-12 04:45:20.693498] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.809 [2024-05-12 04:45:20.693624] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.809 [2024-05-12 04:45:20.693736] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.809 [2024-05-12 04:45:20.693768] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.809 passed 00:05:13.809 Test: mem map registration ...[2024-05-12 04:45:20.792465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:13.809 [2024-05-12 04:45:20.792555] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:13.809 passed 00:05:13.809 Test: mem map adjacent registrations ...passed 00:05:13.809 00:05:13.809 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.809 suites 1 1 n/a 0 0 00:05:13.809 tests 4 4 4 0 0 00:05:13.809 asserts 152 152 152 0 n/a 00:05:13.809 00:05:13.809 Elapsed time = 0.345 seconds 00:05:14.068 00:05:14.068 real 0m0.380s 00:05:14.068 user 0m0.351s 00:05:14.068 sys 0m0.027s 00:05:14.068 04:45:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.068 04:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:14.068 ************************************ 00:05:14.068 END TEST env_memory 00:05:14.068 ************************************ 00:05:14.068 04:45:20 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.068 04:45:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.068 04:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.068 04:45:20 -- common/autotest_common.sh@10 -- # set +x 00:05:14.068 ************************************ 00:05:14.068 START TEST env_vtophys 00:05:14.068 ************************************ 00:05:14.068 04:45:20 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.068 EAL: lib.eal log level changed from notice to debug 00:05:14.068 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 1 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 2 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 3 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 4 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 5 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 6 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 7 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 8 as core 0 on socket 0 00:05:14.068 EAL: Detected lcore 9 as core 0 on socket 0 00:05:14.068 EAL: Maximum logical cores by configuration: 128 00:05:14.068 EAL: Detected CPU lcores: 10 00:05:14.068 EAL: Detected NUMA nodes: 1 00:05:14.068 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:14.068 EAL: Detected shared linkage of DPDK 00:05:14.068 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.068 EAL: Selected IOVA mode 'PA' 00:05:14.068 EAL: Probing VFIO support... 00:05:14.068 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.068 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:14.068 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.068 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.068 EAL: Setting up physically contiguous memory... 00:05:14.068 EAL: Setting maximum number of open files to 524288 00:05:14.068 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.068 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.068 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.068 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.068 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.068 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.068 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.068 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.068 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.068 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.068 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.068 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.068 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.068 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:14.068 EAL: Hugepages will be freed exactly as allocated. 
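
The memseg-list sizes printed above follow directly from the stated parameters: each list holds n_segs:8192 segments of hugepage_sz:2097152 bytes, and 8192 * 2097152 = 17179869184 = 0x400000000, exactly the "size = 0x400000000" of each reserved virtual area. The four lists together reserve 64 GiB of address space up front, but only virtually; physical hugepages are mapped in later, which is why the heap can grow and shrink in the malloc tests that follow.
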
00:05:14.068 EAL: No shared files mode enabled, IPC is disabled 00:05:14.068 EAL: No shared files mode enabled, IPC is disabled 00:05:14.068 EAL: TSC frequency is ~2200000 KHz 00:05:14.327 EAL: Main lcore 0 is ready (tid=7f57a0f14a40;cpuset=[0]) 00:05:14.327 EAL: Trying to obtain current memory policy. 00:05:14.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.327 EAL: Restoring previous memory policy: 0 00:05:14.327 EAL: request: mp_malloc_sync 00:05:14.327 EAL: No shared files mode enabled, IPC is disabled 00:05:14.327 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.327 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.327 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:14.327 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.327 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:14.327 00:05:14.327 00:05:14.327 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.327 http://cunit.sourceforge.net/ 00:05:14.327 00:05:14.327 00:05:14.327 Suite: components_suite 00:05:14.586 Test: vtophys_malloc_test ...passed 00:05:14.586 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:14.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.586 EAL: Restoring previous memory policy: 4 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was expanded by 4MB 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was shrunk by 4MB 00:05:14.586 EAL: Trying to obtain current memory policy. 00:05:14.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.586 EAL: Restoring previous memory policy: 4 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was expanded by 6MB 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was shrunk by 6MB 00:05:14.586 EAL: Trying to obtain current memory policy. 00:05:14.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.586 EAL: Restoring previous memory policy: 4 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was expanded by 10MB 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was shrunk by 10MB 00:05:14.586 EAL: Trying to obtain current memory policy. 
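The "Mem event callback 'spdk:(nil)' registered" notice above is SPDK hooking DPDK's dynamic memory subsystem, and the expand/shrink pairs running through this section are that subsystem growing and trimming the hugepage heap as the test allocates progressively larger buffers. A sketch of the underlying DPDK hook (rte_memory.h; SPDK registers an equivalent callback internally, so this is illustrative rather than SPDK code):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>

    /* Fired whenever hugepage memory is added to or removed from the
     * heap, i.e. on every "expanded by"/"shrunk by" pair in the log. */
    static void
    mem_event(enum rte_mem_event event_type, const void *addr, size_t len,
              void *arg)
    {
        printf("%s: addr=%p len=%zu\n",
               event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free",
               addr, len);
    }

    int
    main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) {
            return 1;
        }
        rte_mem_event_callback_register("demo", mem_event, NULL);
        /* rte_malloc()/rte_free() calls of growing sizes would now
         * drive the callback, mirroring the sequence above. */
        return 0;
    }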
00:05:14.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.586 EAL: Restoring previous memory policy: 4 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.586 EAL: Trying to obtain current memory policy. 00:05:14.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.586 EAL: Restoring previous memory policy: 4 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.586 EAL: request: mp_malloc_sync 00:05:14.586 EAL: No shared files mode enabled, IPC is disabled 00:05:14.586 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.845 EAL: Trying to obtain current memory policy. 00:05:14.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.845 EAL: Restoring previous memory policy: 4 00:05:14.845 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.845 EAL: request: mp_malloc_sync 00:05:14.845 EAL: No shared files mode enabled, IPC is disabled 00:05:14.845 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.845 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.845 EAL: request: mp_malloc_sync 00:05:14.845 EAL: No shared files mode enabled, IPC is disabled 00:05:14.845 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.845 EAL: Trying to obtain current memory policy. 00:05:14.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.845 EAL: Restoring previous memory policy: 4 00:05:14.845 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.845 EAL: request: mp_malloc_sync 00:05:14.845 EAL: No shared files mode enabled, IPC is disabled 00:05:14.845 EAL: Heap on socket 0 was expanded by 130MB 00:05:15.104 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.104 EAL: request: mp_malloc_sync 00:05:15.104 EAL: No shared files mode enabled, IPC is disabled 00:05:15.104 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.363 EAL: Trying to obtain current memory policy. 00:05:15.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.363 EAL: Restoring previous memory policy: 4 00:05:15.363 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.363 EAL: request: mp_malloc_sync 00:05:15.363 EAL: No shared files mode enabled, IPC is disabled 00:05:15.363 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.620 EAL: request: mp_malloc_sync 00:05:15.620 EAL: No shared files mode enabled, IPC is disabled 00:05:15.620 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.879 EAL: Trying to obtain current memory policy. 
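The policy/expand/shrink rounds above (4 MB up through 258 MB, continuing to 1026 MB below) are vtophys_spdk_malloc_test allocating ever larger DMA-safe buffers, translating them, and freeing them again. A minimal sketch of the calls being exercised, assuming the spdk/env.h prototypes:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/env.h"

    void
    vtophys_sketch(void)
    {
        /* Pinned, hugepage-backed allocation; this is what makes EAL
         * expand the socket-0 heap in the log above. */
        void *buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
        if (buf == NULL) {
            return;
        }

        uint64_t size = 4 * 1024 * 1024;
        uint64_t paddr = spdk_vtophys(buf, &size);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "translation failed\n");
        } else {
            printf("vaddr %p -> paddr 0x%" PRIx64 " (%" PRIu64
                   " contiguous bytes)\n", buf, paddr, size);
        }

        /* Freeing lets DPDK trim the heap again ("shrunk by ..."). */
        spdk_dma_free(buf);
    }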
00:05:15.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.137 EAL: Restoring previous memory policy: 4 00:05:16.137 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.137 EAL: request: mp_malloc_sync 00:05:16.137 EAL: No shared files mode enabled, IPC is disabled 00:05:16.137 EAL: Heap on socket 0 was expanded by 514MB 00:05:16.704 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.704 EAL: request: mp_malloc_sync 00:05:16.704 EAL: No shared files mode enabled, IPC is disabled 00:05:16.704 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.297 EAL: Trying to obtain current memory policy. 00:05:17.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.556 EAL: Restoring previous memory policy: 4 00:05:17.556 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.556 EAL: request: mp_malloc_sync 00:05:17.556 EAL: No shared files mode enabled, IPC is disabled 00:05:17.556 EAL: Heap on socket 0 was expanded by 1026MB 00:05:18.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.934 EAL: request: mp_malloc_sync 00:05:18.934 EAL: No shared files mode enabled, IPC is disabled 00:05:18.934 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:20.313 passed 00:05:20.313 00:05:20.313 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.313 suites 1 1 n/a 0 0 00:05:20.313 tests 2 2 2 0 0 00:05:20.313 asserts 5355 5355 5355 0 n/a 00:05:20.313 00:05:20.313 Elapsed time = 5.862 seconds 00:05:20.313 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.313 EAL: request: mp_malloc_sync 00:05:20.313 EAL: No shared files mode enabled, IPC is disabled 00:05:20.313 EAL: Heap on socket 0 was shrunk by 2MB 00:05:20.313 EAL: No shared files mode enabled, IPC is disabled 00:05:20.313 EAL: No shared files mode enabled, IPC is disabled 00:05:20.313 EAL: No shared files mode enabled, IPC is disabled 00:05:20.313 00:05:20.313 real 0m6.172s 00:05:20.313 user 0m5.388s 00:05:20.313 sys 0m0.633s 00:05:20.313 04:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.313 ************************************ 00:05:20.313 END TEST env_vtophys 00:05:20.313 04:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.313 ************************************ 00:05:20.313 04:45:27 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:20.313 04:45:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.313 04:45:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.313 04:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.313 ************************************ 00:05:20.313 START TEST env_pci 00:05:20.313 ************************************ 00:05:20.313 04:45:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:20.313 00:05:20.313 00:05:20.313 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.313 http://cunit.sourceforge.net/ 00:05:20.313 00:05:20.313 00:05:20.313 Suite: pci 00:05:20.313 Test: pci_hook ...[2024-05-12 04:45:27.264903] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56795 has claimed it 00:05:20.313 passed 00:05:20.313 00:05:20.313 EAL: Cannot find device (10000:00:01.0) 00:05:20.313 EAL: Failed to attach device on primary process 00:05:20.313 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.313 suites 1 1 n/a 0 0 00:05:20.313 tests 1 1 1 0 0 00:05:20.313 asserts 25 25 25 0 n/a 00:05:20.313 00:05:20.313 Elapsed 
time = 0.009 seconds 00:05:20.313 00:05:20.313 real 0m0.086s 00:05:20.313 user 0m0.044s 00:05:20.313 sys 0m0.042s 00:05:20.313 04:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.313 04:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.313 ************************************ 00:05:20.313 END TEST env_pci 00:05:20.313 ************************************ 00:05:20.313 04:45:27 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:20.313 04:45:27 -- env/env.sh@15 -- # uname 00:05:20.313 04:45:27 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:20.313 04:45:27 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:20.313 04:45:27 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.313 04:45:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:20.313 04:45:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.313 04:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.313 ************************************ 00:05:20.313 START TEST env_dpdk_post_init 00:05:20.313 ************************************ 00:05:20.313 04:45:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.574 EAL: Detected CPU lcores: 10 00:05:20.574 EAL: Detected NUMA nodes: 1 00:05:20.574 EAL: Detected shared linkage of DPDK 00:05:20.574 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.574 EAL: Selected IOVA mode 'PA' 00:05:20.574 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.574 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:20.574 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:20.574 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:20.574 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:20.574 Starting DPDK initialization... 00:05:20.574 Starting SPDK post initialization... 00:05:20.574 SPDK NVMe probe 00:05:20.574 Attaching to 0000:00:06.0 00:05:20.574 Attaching to 0000:00:07.0 00:05:20.574 Attaching to 0000:00:08.0 00:05:20.574 Attaching to 0000:00:09.0 00:05:20.574 Attached to 0000:00:06.0 00:05:20.574 Attached to 0000:00:07.0 00:05:20.574 Attached to 0000:00:09.0 00:05:20.574 Attached to 0000:00:08.0 00:05:20.574 Cleaning up... 
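env_dpdk_post_init above brings the SPDK environment up with -c 0x1 and --base-virtaddr=0x200000000000 and then attaches to the four emulated NVMe controllers. Those two flags map onto spdk_env_opts roughly as follows (a sketch under the spdk/env.h field names; the actual probe is performed by spdk_nvme_probe() in the test binary):

    #include "spdk/env.h"

    int
    env_init_sketch(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";        /* process name / shm prefix */
        opts.core_mask = "0x1";                  /* the -c 0x1 argument */
        opts.base_virtaddr = 0x200000000000ULL;  /* --base-virtaddr */

        if (spdk_env_init(&opts) < 0) {
            return -1;
        }
        /* The process can now enumerate PCI devices; probing NVMe from
         * here yields the "Attaching to 0000:00:0X.0" lines above. */
        return 0;
    }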
00:05:20.574 00:05:20.574 real 0m0.285s 00:05:20.574 user 0m0.092s 00:05:20.574 sys 0m0.096s 00:05:20.574 04:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.574 04:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.574 ************************************ 00:05:20.574 END TEST env_dpdk_post_init 00:05:20.574 ************************************ 00:05:20.833 04:45:27 -- env/env.sh@26 -- # uname 00:05:20.833 04:45:27 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:20.833 04:45:27 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.833 04:45:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.833 04:45:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.833 04:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.833 ************************************ 00:05:20.833 START TEST env_mem_callbacks 00:05:20.833 ************************************ 00:05:20.833 04:45:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.833 EAL: Detected CPU lcores: 10 00:05:20.833 EAL: Detected NUMA nodes: 1 00:05:20.833 EAL: Detected shared linkage of DPDK 00:05:20.833 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.833 EAL: Selected IOVA mode 'PA' 00:05:20.833 00:05:20.833 00:05:20.833 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.833 http://cunit.sourceforge.net/ 00:05:20.833 00:05:20.833 00:05:20.833 Suite: memory 00:05:20.833 Test: test ... 00:05:20.833 register 0x200000200000 2097152 00:05:20.833 malloc 3145728 00:05:20.833 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.833 register 0x200000400000 4194304 00:05:20.833 buf 0x2000004fffc0 len 3145728 PASSED 00:05:20.833 malloc 64 00:05:20.833 buf 0x2000004ffec0 len 64 PASSED 00:05:20.833 malloc 4194304 00:05:20.833 register 0x200000800000 6291456 00:05:20.833 buf 0x2000009fffc0 len 4194304 PASSED 00:05:20.833 free 0x2000004fffc0 3145728 00:05:20.833 free 0x2000004ffec0 64 00:05:20.833 unregister 0x200000400000 4194304 PASSED 00:05:20.833 free 0x2000009fffc0 4194304 00:05:20.833 unregister 0x200000800000 6291456 PASSED 00:05:20.833 malloc 8388608 00:05:20.833 register 0x200000400000 10485760 00:05:20.833 buf 0x2000005fffc0 len 8388608 PASSED 00:05:20.833 free 0x2000005fffc0 8388608 00:05:20.833 unregister 0x200000400000 10485760 PASSED 00:05:21.092 passed 00:05:21.092 00:05:21.092 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.093 suites 1 1 n/a 0 0 00:05:21.093 tests 1 1 1 0 0 00:05:21.093 asserts 15 15 15 0 n/a 00:05:21.093 00:05:21.093 Elapsed time = 0.060 seconds 00:05:21.093 00:05:21.093 real 0m0.266s 00:05:21.093 user 0m0.097s 00:05:21.093 sys 0m0.067s 00:05:21.093 04:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.093 ************************************ 00:05:21.093 END TEST env_mem_callbacks 00:05:21.093 04:45:27 -- common/autotest_common.sh@10 -- # set +x 00:05:21.093 ************************************ 00:05:21.093 00:05:21.093 real 0m7.541s 00:05:21.093 user 0m6.078s 00:05:21.093 sys 0m1.086s 00:05:21.093 04:45:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.093 04:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.093 ************************************ 00:05:21.093 END TEST env 00:05:21.093 ************************************ 00:05:21.093 04:45:28 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
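The register/unregister lines in the env_mem_callbacks run above come from a test-supplied notify callback: every spdk_mem_register() call, and every hugepage region the allocator brings in, is broadcast to each registered memory map. A sketch of such a callback, under the same spdk/env.h assumptions as the earlier snippets:

    #include <stdio.h>
    #include "spdk/env.h"

    static int
    test_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
                    enum spdk_mem_map_notify_action action,
                    void *vaddr, size_t size)
    {
        /* Printouts of this shape are what appear as
         * "register 0x... <len>" / "unregister 0x... <len>" above. */
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register"
                                                      : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops test_ops = {
        .notify_cb = test_mem_notify,
    };

    void
    mem_callbacks_sketch(void)
    {
        /* Allocating the map replays all existing registrations through
         * the callback; later register/unregister events arrive live. */
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &test_ops, NULL);
        spdk_mem_map_free(&map);
    }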
00:05:21.093 04:45:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.093 04:45:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.093 04:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.093 ************************************ 00:05:21.093 START TEST rpc 00:05:21.093 ************************************ 00:05:21.093 04:45:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:21.093 * Looking for test storage... 00:05:21.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.093 04:45:28 -- rpc/rpc.sh@65 -- # spdk_pid=56913 00:05:21.093 04:45:28 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.093 04:45:28 -- rpc/rpc.sh@67 -- # waitforlisten 56913 00:05:21.093 04:45:28 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:21.093 04:45:28 -- common/autotest_common.sh@819 -- # '[' -z 56913 ']' 00:05:21.093 04:45:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.093 04:45:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.093 04:45:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.093 04:45:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.093 04:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.352 [2024-05-12 04:45:28.273701] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:21.352 [2024-05-12 04:45:28.273895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56913 ] 00:05:21.352 [2024-05-12 04:45:28.446328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.611 [2024-05-12 04:45:28.596483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.611 [2024-05-12 04:45:28.596729] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:21.611 [2024-05-12 04:45:28.596756] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56913' to capture a snapshot of events at runtime. 00:05:21.611 [2024-05-12 04:45:28.596769] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56913 for offline analysis/debug. 
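From here the harness drives spdk_tgt (pid 56913) over its JSON-RPC socket: each rpc_cmd in rpc.sh writes a JSON-RPC 2.0 request to /var/tmp/spdk.sock and reads back the reply, and the bdev_get_bdevs JSON dumps below are exactly such replies. A self-contained sketch of that wire exchange with plain POSIX sockets (not SPDK's own client library; the single read() is a simplification, since large replies arrive in several chunks):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int
    main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* The same request rpc.sh issues as "rpc_cmd bdev_get_bdevs". */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
        write(fd, req, strlen(req));

        char buf[65536];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }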
00:05:21.611 [2024-05-12 04:45:28.596804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.987 04:45:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.987 04:45:29 -- common/autotest_common.sh@852 -- # return 0 00:05:22.987 04:45:29 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:22.987 04:45:29 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:22.987 04:45:29 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:22.987 04:45:29 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:22.987 04:45:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.987 04:45:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.987 04:45:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.987 ************************************ 00:05:22.987 START TEST rpc_integrity 00:05:22.987 ************************************ 00:05:22.987 04:45:29 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:22.987 04:45:29 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.987 04:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.987 04:45:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.987 04:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.987 04:45:29 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.987 04:45:29 -- rpc/rpc.sh@13 -- # jq length 00:05:22.987 04:45:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.987 04:45:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.987 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.987 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.987 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.987 04:45:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:22.987 04:45:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.987 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.987 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.987 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.987 04:45:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.987 { 00:05:22.987 "name": "Malloc0", 00:05:22.987 "aliases": [ 00:05:22.988 "a7987d21-ac02-405a-a142-7245e6771ec7" 00:05:22.988 ], 00:05:22.988 "product_name": "Malloc disk", 00:05:22.988 "block_size": 512, 00:05:22.988 "num_blocks": 16384, 00:05:22.988 "uuid": "a7987d21-ac02-405a-a142-7245e6771ec7", 00:05:22.988 "assigned_rate_limits": { 00:05:22.988 "rw_ios_per_sec": 0, 00:05:22.988 "rw_mbytes_per_sec": 0, 00:05:22.988 "r_mbytes_per_sec": 0, 00:05:22.988 "w_mbytes_per_sec": 0 00:05:22.988 }, 00:05:22.988 "claimed": false, 00:05:22.988 "zoned": false, 00:05:22.988 "supported_io_types": { 00:05:22.988 "read": true, 00:05:22.988 "write": true, 00:05:22.988 "unmap": true, 00:05:22.988 "write_zeroes": true, 00:05:22.988 "flush": true, 00:05:22.988 "reset": true, 00:05:22.988 "compare": false, 00:05:22.988 "compare_and_write": false, 00:05:22.988 "abort": true, 00:05:22.988 "nvme_admin": false, 00:05:22.988 "nvme_io": false 00:05:22.988 }, 00:05:22.988 "memory_domains": [ 00:05:22.988 { 00:05:22.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.988 
"dma_device_type": 2 00:05:22.988 } 00:05:22.988 ], 00:05:22.988 "driver_specific": {} 00:05:22.988 } 00:05:22.988 ]' 00:05:22.988 04:45:30 -- rpc/rpc.sh@17 -- # jq length 00:05:22.988 04:45:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.988 04:45:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:22.988 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.988 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:22.988 [2024-05-12 04:45:30.105170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:22.988 [2024-05-12 04:45:30.105297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.988 [2024-05-12 04:45:30.105339] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:22.988 [2024-05-12 04:45:30.105358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.988 [2024-05-12 04:45:30.108065] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.988 [2024-05-12 04:45:30.108139] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.988 Passthru0 00:05:22.988 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.988 04:45:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.988 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.988 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.247 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.247 04:45:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.247 { 00:05:23.247 "name": "Malloc0", 00:05:23.247 "aliases": [ 00:05:23.247 "a7987d21-ac02-405a-a142-7245e6771ec7" 00:05:23.247 ], 00:05:23.247 "product_name": "Malloc disk", 00:05:23.247 "block_size": 512, 00:05:23.247 "num_blocks": 16384, 00:05:23.247 "uuid": "a7987d21-ac02-405a-a142-7245e6771ec7", 00:05:23.247 "assigned_rate_limits": { 00:05:23.247 "rw_ios_per_sec": 0, 00:05:23.247 "rw_mbytes_per_sec": 0, 00:05:23.247 "r_mbytes_per_sec": 0, 00:05:23.247 "w_mbytes_per_sec": 0 00:05:23.247 }, 00:05:23.247 "claimed": true, 00:05:23.247 "claim_type": "exclusive_write", 00:05:23.247 "zoned": false, 00:05:23.247 "supported_io_types": { 00:05:23.247 "read": true, 00:05:23.247 "write": true, 00:05:23.247 "unmap": true, 00:05:23.247 "write_zeroes": true, 00:05:23.247 "flush": true, 00:05:23.247 "reset": true, 00:05:23.247 "compare": false, 00:05:23.247 "compare_and_write": false, 00:05:23.247 "abort": true, 00:05:23.247 "nvme_admin": false, 00:05:23.247 "nvme_io": false 00:05:23.247 }, 00:05:23.247 "memory_domains": [ 00:05:23.247 { 00:05:23.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.247 "dma_device_type": 2 00:05:23.247 } 00:05:23.247 ], 00:05:23.247 "driver_specific": {} 00:05:23.247 }, 00:05:23.247 { 00:05:23.247 "name": "Passthru0", 00:05:23.247 "aliases": [ 00:05:23.247 "9ceed1ba-1a4d-5ae7-84a1-776120814d46" 00:05:23.247 ], 00:05:23.247 "product_name": "passthru", 00:05:23.247 "block_size": 512, 00:05:23.247 "num_blocks": 16384, 00:05:23.247 "uuid": "9ceed1ba-1a4d-5ae7-84a1-776120814d46", 00:05:23.247 "assigned_rate_limits": { 00:05:23.247 "rw_ios_per_sec": 0, 00:05:23.247 "rw_mbytes_per_sec": 0, 00:05:23.247 "r_mbytes_per_sec": 0, 00:05:23.247 "w_mbytes_per_sec": 0 00:05:23.247 }, 00:05:23.247 "claimed": false, 00:05:23.247 "zoned": false, 00:05:23.247 "supported_io_types": { 00:05:23.247 "read": true, 00:05:23.247 "write": true, 00:05:23.247 "unmap": true, 00:05:23.247 
"write_zeroes": true, 00:05:23.247 "flush": true, 00:05:23.247 "reset": true, 00:05:23.247 "compare": false, 00:05:23.247 "compare_and_write": false, 00:05:23.247 "abort": true, 00:05:23.247 "nvme_admin": false, 00:05:23.247 "nvme_io": false 00:05:23.247 }, 00:05:23.247 "memory_domains": [ 00:05:23.247 { 00:05:23.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.247 "dma_device_type": 2 00:05:23.247 } 00:05:23.247 ], 00:05:23.247 "driver_specific": { 00:05:23.247 "passthru": { 00:05:23.247 "name": "Passthru0", 00:05:23.247 "base_bdev_name": "Malloc0" 00:05:23.247 } 00:05:23.247 } 00:05:23.247 } 00:05:23.247 ]' 00:05:23.247 04:45:30 -- rpc/rpc.sh@21 -- # jq length 00:05:23.247 04:45:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.247 04:45:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.247 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.247 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.247 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.247 04:45:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:23.247 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.247 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.247 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.247 04:45:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:23.247 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.247 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.247 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.247 04:45:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:23.247 04:45:30 -- rpc/rpc.sh@26 -- # jq length 00:05:23.247 04:45:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:23.247 00:05:23.247 real 0m0.355s 00:05:23.247 user 0m0.220s 00:05:23.247 sys 0m0.039s 00:05:23.247 ************************************ 00:05:23.247 END TEST rpc_integrity 00:05:23.247 ************************************ 00:05:23.247 04:45:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.247 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.247 04:45:30 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:23.247 04:45:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.247 04:45:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.247 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.247 ************************************ 00:05:23.247 START TEST rpc_plugins 00:05:23.247 ************************************ 00:05:23.247 04:45:30 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:23.247 04:45:30 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:23.247 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.247 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.247 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.247 04:45:30 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:23.247 04:45:30 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:23.247 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.247 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.506 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.506 04:45:30 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:23.506 { 00:05:23.506 "name": "Malloc1", 00:05:23.506 "aliases": [ 00:05:23.506 "89ea71a2-514b-4417-9948-f7ed323d23b5" 00:05:23.506 ], 00:05:23.506 "product_name": "Malloc disk", 00:05:23.506 
"block_size": 4096, 00:05:23.506 "num_blocks": 256, 00:05:23.506 "uuid": "89ea71a2-514b-4417-9948-f7ed323d23b5", 00:05:23.506 "assigned_rate_limits": { 00:05:23.506 "rw_ios_per_sec": 0, 00:05:23.506 "rw_mbytes_per_sec": 0, 00:05:23.506 "r_mbytes_per_sec": 0, 00:05:23.506 "w_mbytes_per_sec": 0 00:05:23.506 }, 00:05:23.506 "claimed": false, 00:05:23.506 "zoned": false, 00:05:23.506 "supported_io_types": { 00:05:23.506 "read": true, 00:05:23.506 "write": true, 00:05:23.506 "unmap": true, 00:05:23.506 "write_zeroes": true, 00:05:23.506 "flush": true, 00:05:23.506 "reset": true, 00:05:23.506 "compare": false, 00:05:23.506 "compare_and_write": false, 00:05:23.506 "abort": true, 00:05:23.506 "nvme_admin": false, 00:05:23.506 "nvme_io": false 00:05:23.506 }, 00:05:23.506 "memory_domains": [ 00:05:23.506 { 00:05:23.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.506 "dma_device_type": 2 00:05:23.506 } 00:05:23.506 ], 00:05:23.506 "driver_specific": {} 00:05:23.506 } 00:05:23.506 ]' 00:05:23.506 04:45:30 -- rpc/rpc.sh@32 -- # jq length 00:05:23.506 04:45:30 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:23.506 04:45:30 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:23.506 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.506 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.506 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.506 04:45:30 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:23.506 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.506 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.506 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.507 04:45:30 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:23.507 04:45:30 -- rpc/rpc.sh@36 -- # jq length 00:05:23.507 04:45:30 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:23.507 00:05:23.507 real 0m0.163s 00:05:23.507 user 0m0.100s 00:05:23.507 sys 0m0.023s 00:05:23.507 ************************************ 00:05:23.507 END TEST rpc_plugins 00:05:23.507 ************************************ 00:05:23.507 04:45:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.507 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.507 04:45:30 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:23.507 04:45:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.507 04:45:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.507 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.507 ************************************ 00:05:23.507 START TEST rpc_trace_cmd_test 00:05:23.507 ************************************ 00:05:23.507 04:45:30 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:23.507 04:45:30 -- rpc/rpc.sh@40 -- # local info 00:05:23.507 04:45:30 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:23.507 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.507 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.507 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.507 04:45:30 -- rpc/rpc.sh@42 -- # info='{ 00:05:23.507 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56913", 00:05:23.507 "tpoint_group_mask": "0x8", 00:05:23.507 "iscsi_conn": { 00:05:23.507 "mask": "0x2", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "scsi": { 00:05:23.507 "mask": "0x4", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "bdev": { 00:05:23.507 "mask": "0x8", 00:05:23.507 "tpoint_mask": 
"0xffffffffffffffff" 00:05:23.507 }, 00:05:23.507 "nvmf_rdma": { 00:05:23.507 "mask": "0x10", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "nvmf_tcp": { 00:05:23.507 "mask": "0x20", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "ftl": { 00:05:23.507 "mask": "0x40", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "blobfs": { 00:05:23.507 "mask": "0x80", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "dsa": { 00:05:23.507 "mask": "0x200", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "thread": { 00:05:23.507 "mask": "0x400", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "nvme_pcie": { 00:05:23.507 "mask": "0x800", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "iaa": { 00:05:23.507 "mask": "0x1000", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "nvme_tcp": { 00:05:23.507 "mask": "0x2000", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 }, 00:05:23.507 "bdev_nvme": { 00:05:23.507 "mask": "0x4000", 00:05:23.507 "tpoint_mask": "0x0" 00:05:23.507 } 00:05:23.507 }' 00:05:23.507 04:45:30 -- rpc/rpc.sh@43 -- # jq length 00:05:23.766 04:45:30 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:23.766 04:45:30 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:23.766 04:45:30 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:23.766 04:45:30 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:23.766 04:45:30 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:23.766 04:45:30 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:23.766 04:45:30 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:23.766 04:45:30 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:23.766 04:45:30 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:23.766 00:05:23.766 real 0m0.306s 00:05:23.766 user 0m0.275s 00:05:23.766 sys 0m0.021s 00:05:23.766 ************************************ 00:05:23.766 END TEST rpc_trace_cmd_test 00:05:23.766 ************************************ 00:05:23.766 04:45:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.766 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 04:45:30 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:24.026 04:45:30 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:24.026 04:45:30 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:24.026 04:45:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.026 04:45:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.026 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 ************************************ 00:05:24.026 START TEST rpc_daemon_integrity 00:05:24.026 ************************************ 00:05:24.026 04:45:30 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:24.026 04:45:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.026 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.026 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 04:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.026 04:45:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.026 04:45:30 -- rpc/rpc.sh@13 -- # jq length 00:05:24.026 04:45:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.026 04:45:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.026 04:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.026 04:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 04:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.026 04:45:31 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:24.026 04:45:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.026 04:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.026 04:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 04:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.026 04:45:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.026 { 00:05:24.026 "name": "Malloc2", 00:05:24.026 "aliases": [ 00:05:24.026 "4f5b6440-4489-4304-baad-1a9f224e0308" 00:05:24.026 ], 00:05:24.026 "product_name": "Malloc disk", 00:05:24.026 "block_size": 512, 00:05:24.026 "num_blocks": 16384, 00:05:24.026 "uuid": "4f5b6440-4489-4304-baad-1a9f224e0308", 00:05:24.026 "assigned_rate_limits": { 00:05:24.026 "rw_ios_per_sec": 0, 00:05:24.026 "rw_mbytes_per_sec": 0, 00:05:24.026 "r_mbytes_per_sec": 0, 00:05:24.026 "w_mbytes_per_sec": 0 00:05:24.026 }, 00:05:24.026 "claimed": false, 00:05:24.026 "zoned": false, 00:05:24.026 "supported_io_types": { 00:05:24.026 "read": true, 00:05:24.026 "write": true, 00:05:24.026 "unmap": true, 00:05:24.026 "write_zeroes": true, 00:05:24.026 "flush": true, 00:05:24.026 "reset": true, 00:05:24.026 "compare": false, 00:05:24.026 "compare_and_write": false, 00:05:24.026 "abort": true, 00:05:24.026 "nvme_admin": false, 00:05:24.026 "nvme_io": false 00:05:24.026 }, 00:05:24.026 "memory_domains": [ 00:05:24.026 { 00:05:24.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.026 "dma_device_type": 2 00:05:24.026 } 00:05:24.026 ], 00:05:24.026 "driver_specific": {} 00:05:24.026 } 00:05:24.026 ]' 00:05:24.026 04:45:31 -- rpc/rpc.sh@17 -- # jq length 00:05:24.026 04:45:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.026 04:45:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:24.026 04:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.026 04:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 [2024-05-12 04:45:31.082892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:24.026 [2024-05-12 04:45:31.083033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.026 [2024-05-12 04:45:31.083064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:24.026 [2024-05-12 04:45:31.083087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.026 [2024-05-12 04:45:31.085971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.026 [2024-05-12 04:45:31.086049] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.026 Passthru0 00:05:24.026 04:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.026 04:45:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.026 04:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.026 04:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 04:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.026 04:45:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.026 { 00:05:24.026 "name": "Malloc2", 00:05:24.026 "aliases": [ 00:05:24.026 "4f5b6440-4489-4304-baad-1a9f224e0308" 00:05:24.026 ], 00:05:24.026 "product_name": "Malloc disk", 00:05:24.026 "block_size": 512, 00:05:24.026 "num_blocks": 16384, 00:05:24.026 "uuid": "4f5b6440-4489-4304-baad-1a9f224e0308", 00:05:24.026 "assigned_rate_limits": { 00:05:24.026 "rw_ios_per_sec": 0, 00:05:24.026 "rw_mbytes_per_sec": 0, 00:05:24.026 "r_mbytes_per_sec": 0, 00:05:24.026 
"w_mbytes_per_sec": 0 00:05:24.026 }, 00:05:24.026 "claimed": true, 00:05:24.026 "claim_type": "exclusive_write", 00:05:24.026 "zoned": false, 00:05:24.026 "supported_io_types": { 00:05:24.026 "read": true, 00:05:24.026 "write": true, 00:05:24.026 "unmap": true, 00:05:24.026 "write_zeroes": true, 00:05:24.026 "flush": true, 00:05:24.026 "reset": true, 00:05:24.026 "compare": false, 00:05:24.026 "compare_and_write": false, 00:05:24.026 "abort": true, 00:05:24.026 "nvme_admin": false, 00:05:24.026 "nvme_io": false 00:05:24.026 }, 00:05:24.026 "memory_domains": [ 00:05:24.026 { 00:05:24.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.026 "dma_device_type": 2 00:05:24.026 } 00:05:24.026 ], 00:05:24.026 "driver_specific": {} 00:05:24.026 }, 00:05:24.026 { 00:05:24.026 "name": "Passthru0", 00:05:24.026 "aliases": [ 00:05:24.026 "6559f916-df4b-51a3-ac09-aa9bd33f52bd" 00:05:24.026 ], 00:05:24.026 "product_name": "passthru", 00:05:24.026 "block_size": 512, 00:05:24.026 "num_blocks": 16384, 00:05:24.026 "uuid": "6559f916-df4b-51a3-ac09-aa9bd33f52bd", 00:05:24.026 "assigned_rate_limits": { 00:05:24.026 "rw_ios_per_sec": 0, 00:05:24.026 "rw_mbytes_per_sec": 0, 00:05:24.026 "r_mbytes_per_sec": 0, 00:05:24.026 "w_mbytes_per_sec": 0 00:05:24.026 }, 00:05:24.026 "claimed": false, 00:05:24.026 "zoned": false, 00:05:24.026 "supported_io_types": { 00:05:24.026 "read": true, 00:05:24.026 "write": true, 00:05:24.026 "unmap": true, 00:05:24.026 "write_zeroes": true, 00:05:24.026 "flush": true, 00:05:24.026 "reset": true, 00:05:24.026 "compare": false, 00:05:24.026 "compare_and_write": false, 00:05:24.026 "abort": true, 00:05:24.026 "nvme_admin": false, 00:05:24.026 "nvme_io": false 00:05:24.026 }, 00:05:24.026 "memory_domains": [ 00:05:24.026 { 00:05:24.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.026 "dma_device_type": 2 00:05:24.026 } 00:05:24.026 ], 00:05:24.026 "driver_specific": { 00:05:24.026 "passthru": { 00:05:24.026 "name": "Passthru0", 00:05:24.026 "base_bdev_name": "Malloc2" 00:05:24.026 } 00:05:24.026 } 00:05:24.026 } 00:05:24.026 ]' 00:05:24.026 04:45:31 -- rpc/rpc.sh@21 -- # jq length 00:05:24.286 04:45:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.286 04:45:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.286 04:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.286 04:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.286 04:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.286 04:45:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:24.286 04:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.286 04:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.286 04:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.286 04:45:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.286 04:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.286 04:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.286 04:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.286 04:45:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.286 04:45:31 -- rpc/rpc.sh@26 -- # jq length 00:05:24.286 04:45:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.286 00:05:24.286 real 0m0.346s 00:05:24.286 user 0m0.220s 00:05:24.286 sys 0m0.037s 00:05:24.286 ************************************ 00:05:24.286 END TEST rpc_daemon_integrity 00:05:24.286 ************************************ 00:05:24.286 04:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.286 
04:45:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.286 04:45:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:24.286 04:45:31 -- rpc/rpc.sh@84 -- # killprocess 56913 00:05:24.286 04:45:31 -- common/autotest_common.sh@926 -- # '[' -z 56913 ']' 00:05:24.286 04:45:31 -- common/autotest_common.sh@930 -- # kill -0 56913 00:05:24.286 04:45:31 -- common/autotest_common.sh@931 -- # uname 00:05:24.286 04:45:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:24.286 04:45:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56913 00:05:24.286 04:45:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:24.286 killing process with pid 56913 00:05:24.286 04:45:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:24.286 04:45:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56913' 00:05:24.286 04:45:31 -- common/autotest_common.sh@945 -- # kill 56913 00:05:24.286 04:45:31 -- common/autotest_common.sh@950 -- # wait 56913 00:05:26.192 00:05:26.192 real 0m5.004s 00:05:26.192 user 0m6.061s 00:05:26.192 sys 0m0.720s 00:05:26.192 04:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.192 ************************************ 00:05:26.192 END TEST rpc 00:05:26.192 ************************************ 00:05:26.192 04:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.192 04:45:33 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.192 04:45:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.192 04:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.192 04:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.192 ************************************ 00:05:26.192 START TEST rpc_client 00:05:26.192 ************************************ 00:05:26.192 04:45:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.192 * Looking for test storage... 
00:05:26.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:26.192 04:45:33 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:26.192 OK 00:05:26.192 04:45:33 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.192 00:05:26.192 real 0m0.137s 00:05:26.192 user 0m0.060s 00:05:26.192 sys 0m0.083s 00:05:26.192 04:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.192 ************************************ 00:05:26.192 END TEST rpc_client 00:05:26.192 ************************************ 00:05:26.192 04:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.192 04:45:33 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.192 04:45:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.192 04:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.192 04:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.450 ************************************ 00:05:26.450 START TEST json_config 00:05:26.450 ************************************ 00:05:26.451 04:45:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.451 04:45:33 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:26.451 04:45:33 -- nvmf/common.sh@7 -- # uname -s 00:05:26.451 04:45:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.451 04:45:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.451 04:45:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.451 04:45:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.451 04:45:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.451 04:45:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.451 04:45:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.451 04:45:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.451 04:45:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.451 04:45:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.451 04:45:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2d63dd8-27f0-4005-943a-9616d5238cfe 00:05:26.451 04:45:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=d2d63dd8-27f0-4005-943a-9616d5238cfe 00:05:26.451 04:45:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.451 04:45:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.451 04:45:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.451 04:45:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.451 04:45:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.451 04:45:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.451 04:45:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.451 04:45:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- paths/export.sh@5 -- # export PATH 00:05:26.451 04:45:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- nvmf/common.sh@46 -- # : 0 00:05:26.451 04:45:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:26.451 04:45:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:26.451 04:45:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:26.451 04:45:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.451 04:45:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.451 04:45:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:26.451 04:45:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:26.451 04:45:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:26.451 04:45:33 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:26.451 04:45:33 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:26.451 04:45:33 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:26.451 04:45:33 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.451 WARNING: No tests are enabled so not running JSON configuration tests 00:05:26.451 04:45:33 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:26.451 04:45:33 -- json_config/json_config.sh@27 -- # exit 0 00:05:26.451 00:05:26.451 real 0m0.073s 00:05:26.451 user 0m0.038s 00:05:26.451 sys 0m0.036s 00:05:26.451 04:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.451 04:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.451 ************************************ 00:05:26.451 END TEST json_config 00:05:26.451 ************************************ 00:05:26.451 04:45:33 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:26.451 04:45:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.451 04:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.451 04:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.451 ************************************ 00:05:26.451 START TEST json_config_extra_key 00:05:26.451 
************************************ 00:05:26.451 04:45:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:26.451 04:45:33 -- nvmf/common.sh@7 -- # uname -s 00:05:26.451 04:45:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.451 04:45:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.451 04:45:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.451 04:45:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.451 04:45:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.451 04:45:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.451 04:45:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.451 04:45:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.451 04:45:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.451 04:45:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.451 04:45:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2d63dd8-27f0-4005-943a-9616d5238cfe 00:05:26.451 04:45:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=d2d63dd8-27f0-4005-943a-9616d5238cfe 00:05:26.451 04:45:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.451 04:45:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.451 04:45:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.451 04:45:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.451 04:45:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.451 04:45:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.451 04:45:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.451 04:45:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- paths/export.sh@5 -- # export PATH 00:05:26.451 04:45:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.451 04:45:33 -- nvmf/common.sh@46 -- # : 0 00:05:26.451 04:45:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:26.451 04:45:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:26.451 04:45:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:26.451 04:45:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.451 04:45:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.451 04:45:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:26.451 04:45:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:26.451 04:45:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.451 INFO: launching applications... 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=57207 00:05:26.451 Waiting for target to run... 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 57207 /var/tmp/spdk_tgt.sock 00:05:26.451 04:45:33 -- common/autotest_common.sh@819 -- # '[' -z 57207 ']' 00:05:26.451 04:45:33 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:26.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
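Once spdk_tgt (pid 57207 here) is up, json_config_extra_key finishes by sending it SIGINT and polling kill -0 in half-second steps until the pid disappears, which is the "(( i++ )) ... kill -0 57207 ... sleep 0.5" loop visible below. The same wait-for-graceful-exit idiom in C (pid taken from the log, purely illustrative):

    #include <signal.h>
    #include <unistd.h>

    /* Ask a target process to shut down cleanly, polling up to 30
     * times at 0.5 s intervals: a C rendering of the shell loop. */
    static int
    wait_for_exit(pid_t pid)
    {
        kill(pid, SIGINT);             /* "kill -SIGINT 57207" */
        for (int i = 0; i < 30; i++) {
            if (kill(pid, 0) < 0) {    /* "kill -0" fails once it is gone */
                return 0;
            }
            usleep(500000);            /* "sleep 0.5" */
        }
        return -1;                     /* still running: caller escalates */
    }

    int
    main(void)
    {
        return wait_for_exit(57207) == 0 ? 0 : 1;
    }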
00:05:26.452 04:45:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.452 04:45:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.452 04:45:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.452 04:45:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.452 04:45:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.710 [2024-05-12 04:45:33.627917] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:26.710 [2024-05-12 04:45:33.628088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57207 ] 00:05:26.969 [2024-05-12 04:45:33.970447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.228 [2024-05-12 04:45:34.125013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.228 [2024-05-12 04:45:34.125335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.166 04:45:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.166 00:05:28.166 04:45:35 -- common/autotest_common.sh@852 -- # return 0 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:28.166 INFO: shutting down applications... 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 57207 ]] 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 57207 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57207 00:05:28.166 04:45:35 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:28.733 04:45:35 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:28.733 04:45:35 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:28.733 04:45:35 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57207 00:05:28.733 04:45:35 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:29.302 04:45:36 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:29.302 04:45:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:29.302 04:45:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57207 00:05:29.302 04:45:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:29.911 04:45:36 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:29.911 04:45:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:29.911 04:45:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57207 00:05:29.911 04:45:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:30.174 04:45:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:30.174 04:45:37 -- json_config/json_config_extra_key.sh@49 -- # (( 
i < 30 )) 00:05:30.174 04:45:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57207 00:05:30.174 04:45:37 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 57207 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:30.752 SPDK target shutdown done 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:30.752 Success 00:05:30.752 04:45:37 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:30.752 00:05:30.752 real 0m4.350s 00:05:30.752 user 0m4.098s 00:05:30.752 sys 0m0.470s 00:05:30.752 04:45:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.752 ************************************ 00:05:30.752 END TEST json_config_extra_key 00:05:30.752 ************************************ 00:05:30.752 04:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.752 04:45:37 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.752 04:45:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.752 04:45:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.752 04:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:30.752 ************************************ 00:05:30.752 START TEST alias_rpc 00:05:30.752 ************************************ 00:05:30.752 04:45:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.011 * Looking for test storage... 00:05:31.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:31.011 04:45:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.011 04:45:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57311 00:05:31.011 04:45:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57311 00:05:31.011 04:45:37 -- common/autotest_common.sh@819 -- # '[' -z 57311 ']' 00:05:31.011 04:45:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.011 04:45:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.011 04:45:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.011 04:45:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.011 04:45:37 -- common/autotest_common.sh@10 -- # set +x 00:05:31.011 04:45:37 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.011 [2024-05-12 04:45:38.042495] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
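The shutdown sequence that just completed for pid 57207 is worth isolating: json_config_test_shutdown_app sends SIGINT once, then uses kill -0 purely as a liveness probe, sleeping 0.5 s between checks for at most 30 iterations. A sketch of that loop, assuming app_pid holds the target's pid:

# Graceful shutdown: one SIGINT, then poll for exit (30 x 0.5 s budget).
kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done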
00:05:31.011 [2024-05-12 04:45:38.042667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57311 ] 00:05:31.270 [2024-05-12 04:45:38.213611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.270 [2024-05-12 04:45:38.387909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.270 [2024-05-12 04:45:38.388164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.645 04:45:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.645 04:45:39 -- common/autotest_common.sh@852 -- # return 0 00:05:32.645 04:45:39 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:32.904 04:45:39 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57311 00:05:32.904 04:45:39 -- common/autotest_common.sh@926 -- # '[' -z 57311 ']' 00:05:32.904 04:45:39 -- common/autotest_common.sh@930 -- # kill -0 57311 00:05:32.904 04:45:39 -- common/autotest_common.sh@931 -- # uname 00:05:32.904 04:45:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:32.904 04:45:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57311 00:05:32.904 04:45:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:32.904 04:45:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:32.904 killing process with pid 57311 00:05:32.904 04:45:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57311' 00:05:32.904 04:45:39 -- common/autotest_common.sh@945 -- # kill 57311 00:05:32.904 04:45:39 -- common/autotest_common.sh@950 -- # wait 57311 00:05:34.809 00:05:34.809 real 0m3.844s 00:05:34.809 user 0m4.245s 00:05:34.809 sys 0m0.457s 00:05:34.809 04:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.809 04:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.809 ************************************ 00:05:34.809 END TEST alias_rpc 00:05:34.809 ************************************ 00:05:34.809 04:45:41 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:34.809 04:45:41 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:34.809 04:45:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.809 04:45:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.809 04:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.809 ************************************ 00:05:34.809 START TEST spdkcli_tcp 00:05:34.809 ************************************ 00:05:34.809 04:45:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:34.809 * Looking for test storage... 
00:05:34.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:34.809 04:45:41 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:34.809 04:45:41 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.809 04:45:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:34.809 04:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57405 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@27 -- # waitforlisten 57405 00:05:34.809 04:45:41 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.809 04:45:41 -- common/autotest_common.sh@819 -- # '[' -z 57405 ']' 00:05:34.809 04:45:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.809 04:45:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.809 04:45:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.809 04:45:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.809 04:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:35.067 [2024-05-12 04:45:41.941796] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
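spdkcli_tcp exercises the RPC layer over TCP rather than over the UNIX socket directly: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side with client-side retries, as the next entries show. A condensed sketch of that wiring (the commands and flags match the log; the cleanup trap is an assumption, since the test kills socat explicitly later):

# Bridge TCP port 9998 to the target's UNIX RPC socket, then issue an RPC
# over TCP with client retries (-r 100) and a 2 s timeout (-t 2).
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
trap 'kill "$socat_pid" 2>/dev/null' EXIT   # assumed cleanup, not in the log

/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods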
00:05:35.067 [2024-05-12 04:45:41.941955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57405 ] 00:05:35.067 [2024-05-12 04:45:42.112610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.325 [2024-05-12 04:45:42.289658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.325 [2024-05-12 04:45:42.290053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.325 [2024-05-12 04:45:42.290154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.701 04:45:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.701 04:45:43 -- common/autotest_common.sh@852 -- # return 0 00:05:36.701 04:45:43 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:36.701 04:45:43 -- spdkcli/tcp.sh@31 -- # socat_pid=57430 00:05:36.701 04:45:43 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:36.701 [ 00:05:36.701 "bdev_malloc_delete", 00:05:36.701 "bdev_malloc_create", 00:05:36.701 "bdev_null_resize", 00:05:36.701 "bdev_null_delete", 00:05:36.701 "bdev_null_create", 00:05:36.701 "bdev_nvme_cuse_unregister", 00:05:36.701 "bdev_nvme_cuse_register", 00:05:36.701 "bdev_opal_new_user", 00:05:36.701 "bdev_opal_set_lock_state", 00:05:36.701 "bdev_opal_delete", 00:05:36.701 "bdev_opal_get_info", 00:05:36.701 "bdev_opal_create", 00:05:36.701 "bdev_nvme_opal_revert", 00:05:36.701 "bdev_nvme_opal_init", 00:05:36.701 "bdev_nvme_send_cmd", 00:05:36.701 "bdev_nvme_get_path_iostat", 00:05:36.701 "bdev_nvme_get_mdns_discovery_info", 00:05:36.701 "bdev_nvme_stop_mdns_discovery", 00:05:36.701 "bdev_nvme_start_mdns_discovery", 00:05:36.701 "bdev_nvme_set_multipath_policy", 00:05:36.701 "bdev_nvme_set_preferred_path", 00:05:36.701 "bdev_nvme_get_io_paths", 00:05:36.701 "bdev_nvme_remove_error_injection", 00:05:36.701 "bdev_nvme_add_error_injection", 00:05:36.701 "bdev_nvme_get_discovery_info", 00:05:36.701 "bdev_nvme_stop_discovery", 00:05:36.701 "bdev_nvme_start_discovery", 00:05:36.701 "bdev_nvme_get_controller_health_info", 00:05:36.701 "bdev_nvme_disable_controller", 00:05:36.701 "bdev_nvme_enable_controller", 00:05:36.701 "bdev_nvme_reset_controller", 00:05:36.701 "bdev_nvme_get_transport_statistics", 00:05:36.701 "bdev_nvme_apply_firmware", 00:05:36.701 "bdev_nvme_detach_controller", 00:05:36.701 "bdev_nvme_get_controllers", 00:05:36.701 "bdev_nvme_attach_controller", 00:05:36.701 "bdev_nvme_set_hotplug", 00:05:36.701 "bdev_nvme_set_options", 00:05:36.701 "bdev_passthru_delete", 00:05:36.701 "bdev_passthru_create", 00:05:36.701 "bdev_lvol_grow_lvstore", 00:05:36.701 "bdev_lvol_get_lvols", 00:05:36.701 "bdev_lvol_get_lvstores", 00:05:36.701 "bdev_lvol_delete", 00:05:36.701 "bdev_lvol_set_read_only", 00:05:36.701 "bdev_lvol_resize", 00:05:36.701 "bdev_lvol_decouple_parent", 00:05:36.701 "bdev_lvol_inflate", 00:05:36.701 "bdev_lvol_rename", 00:05:36.701 "bdev_lvol_clone_bdev", 00:05:36.701 "bdev_lvol_clone", 00:05:36.701 "bdev_lvol_snapshot", 00:05:36.701 "bdev_lvol_create", 00:05:36.701 "bdev_lvol_delete_lvstore", 00:05:36.701 "bdev_lvol_rename_lvstore", 00:05:36.701 "bdev_lvol_create_lvstore", 00:05:36.701 "bdev_raid_set_options", 00:05:36.701 "bdev_raid_remove_base_bdev", 00:05:36.701 "bdev_raid_add_base_bdev", 
00:05:36.701 "bdev_raid_delete", 00:05:36.701 "bdev_raid_create", 00:05:36.701 "bdev_raid_get_bdevs", 00:05:36.701 "bdev_error_inject_error", 00:05:36.701 "bdev_error_delete", 00:05:36.701 "bdev_error_create", 00:05:36.701 "bdev_split_delete", 00:05:36.701 "bdev_split_create", 00:05:36.701 "bdev_delay_delete", 00:05:36.701 "bdev_delay_create", 00:05:36.701 "bdev_delay_update_latency", 00:05:36.701 "bdev_zone_block_delete", 00:05:36.701 "bdev_zone_block_create", 00:05:36.701 "blobfs_create", 00:05:36.701 "blobfs_detect", 00:05:36.701 "blobfs_set_cache_size", 00:05:36.701 "bdev_xnvme_delete", 00:05:36.701 "bdev_xnvme_create", 00:05:36.701 "bdev_aio_delete", 00:05:36.701 "bdev_aio_rescan", 00:05:36.701 "bdev_aio_create", 00:05:36.701 "bdev_ftl_set_property", 00:05:36.701 "bdev_ftl_get_properties", 00:05:36.701 "bdev_ftl_get_stats", 00:05:36.701 "bdev_ftl_unmap", 00:05:36.701 "bdev_ftl_unload", 00:05:36.701 "bdev_ftl_delete", 00:05:36.701 "bdev_ftl_load", 00:05:36.701 "bdev_ftl_create", 00:05:36.701 "bdev_virtio_attach_controller", 00:05:36.701 "bdev_virtio_scsi_get_devices", 00:05:36.701 "bdev_virtio_detach_controller", 00:05:36.701 "bdev_virtio_blk_set_hotplug", 00:05:36.701 "bdev_iscsi_delete", 00:05:36.701 "bdev_iscsi_create", 00:05:36.701 "bdev_iscsi_set_options", 00:05:36.701 "accel_error_inject_error", 00:05:36.701 "ioat_scan_accel_module", 00:05:36.701 "dsa_scan_accel_module", 00:05:36.701 "iaa_scan_accel_module", 00:05:36.701 "iscsi_set_options", 00:05:36.701 "iscsi_get_auth_groups", 00:05:36.701 "iscsi_auth_group_remove_secret", 00:05:36.701 "iscsi_auth_group_add_secret", 00:05:36.701 "iscsi_delete_auth_group", 00:05:36.701 "iscsi_create_auth_group", 00:05:36.701 "iscsi_set_discovery_auth", 00:05:36.701 "iscsi_get_options", 00:05:36.701 "iscsi_target_node_request_logout", 00:05:36.701 "iscsi_target_node_set_redirect", 00:05:36.701 "iscsi_target_node_set_auth", 00:05:36.701 "iscsi_target_node_add_lun", 00:05:36.701 "iscsi_get_connections", 00:05:36.701 "iscsi_portal_group_set_auth", 00:05:36.701 "iscsi_start_portal_group", 00:05:36.701 "iscsi_delete_portal_group", 00:05:36.701 "iscsi_create_portal_group", 00:05:36.701 "iscsi_get_portal_groups", 00:05:36.701 "iscsi_delete_target_node", 00:05:36.701 "iscsi_target_node_remove_pg_ig_maps", 00:05:36.701 "iscsi_target_node_add_pg_ig_maps", 00:05:36.701 "iscsi_create_target_node", 00:05:36.701 "iscsi_get_target_nodes", 00:05:36.701 "iscsi_delete_initiator_group", 00:05:36.701 "iscsi_initiator_group_remove_initiators", 00:05:36.701 "iscsi_initiator_group_add_initiators", 00:05:36.701 "iscsi_create_initiator_group", 00:05:36.701 "iscsi_get_initiator_groups", 00:05:36.701 "nvmf_set_crdt", 00:05:36.701 "nvmf_set_config", 00:05:36.701 "nvmf_set_max_subsystems", 00:05:36.701 "nvmf_subsystem_get_listeners", 00:05:36.701 "nvmf_subsystem_get_qpairs", 00:05:36.701 "nvmf_subsystem_get_controllers", 00:05:36.701 "nvmf_get_stats", 00:05:36.701 "nvmf_get_transports", 00:05:36.701 "nvmf_create_transport", 00:05:36.701 "nvmf_get_targets", 00:05:36.701 "nvmf_delete_target", 00:05:36.701 "nvmf_create_target", 00:05:36.701 "nvmf_subsystem_allow_any_host", 00:05:36.701 "nvmf_subsystem_remove_host", 00:05:36.701 "nvmf_subsystem_add_host", 00:05:36.701 "nvmf_subsystem_remove_ns", 00:05:36.701 "nvmf_subsystem_add_ns", 00:05:36.701 "nvmf_subsystem_listener_set_ana_state", 00:05:36.701 "nvmf_discovery_get_referrals", 00:05:36.701 "nvmf_discovery_remove_referral", 00:05:36.701 "nvmf_discovery_add_referral", 00:05:36.701 "nvmf_subsystem_remove_listener", 00:05:36.701 
"nvmf_subsystem_add_listener", 00:05:36.701 "nvmf_delete_subsystem", 00:05:36.701 "nvmf_create_subsystem", 00:05:36.701 "nvmf_get_subsystems", 00:05:36.701 "env_dpdk_get_mem_stats", 00:05:36.701 "nbd_get_disks", 00:05:36.701 "nbd_stop_disk", 00:05:36.701 "nbd_start_disk", 00:05:36.701 "ublk_recover_disk", 00:05:36.701 "ublk_get_disks", 00:05:36.701 "ublk_stop_disk", 00:05:36.701 "ublk_start_disk", 00:05:36.701 "ublk_destroy_target", 00:05:36.701 "ublk_create_target", 00:05:36.701 "virtio_blk_create_transport", 00:05:36.701 "virtio_blk_get_transports", 00:05:36.701 "vhost_controller_set_coalescing", 00:05:36.701 "vhost_get_controllers", 00:05:36.701 "vhost_delete_controller", 00:05:36.701 "vhost_create_blk_controller", 00:05:36.701 "vhost_scsi_controller_remove_target", 00:05:36.701 "vhost_scsi_controller_add_target", 00:05:36.701 "vhost_start_scsi_controller", 00:05:36.701 "vhost_create_scsi_controller", 00:05:36.701 "thread_set_cpumask", 00:05:36.701 "framework_get_scheduler", 00:05:36.701 "framework_set_scheduler", 00:05:36.701 "framework_get_reactors", 00:05:36.701 "thread_get_io_channels", 00:05:36.701 "thread_get_pollers", 00:05:36.701 "thread_get_stats", 00:05:36.701 "framework_monitor_context_switch", 00:05:36.701 "spdk_kill_instance", 00:05:36.701 "log_enable_timestamps", 00:05:36.701 "log_get_flags", 00:05:36.701 "log_clear_flag", 00:05:36.701 "log_set_flag", 00:05:36.701 "log_get_level", 00:05:36.701 "log_set_level", 00:05:36.701 "log_get_print_level", 00:05:36.701 "log_set_print_level", 00:05:36.701 "framework_enable_cpumask_locks", 00:05:36.701 "framework_disable_cpumask_locks", 00:05:36.701 "framework_wait_init", 00:05:36.701 "framework_start_init", 00:05:36.701 "scsi_get_devices", 00:05:36.701 "bdev_get_histogram", 00:05:36.701 "bdev_enable_histogram", 00:05:36.701 "bdev_set_qos_limit", 00:05:36.701 "bdev_set_qd_sampling_period", 00:05:36.701 "bdev_get_bdevs", 00:05:36.701 "bdev_reset_iostat", 00:05:36.701 "bdev_get_iostat", 00:05:36.701 "bdev_examine", 00:05:36.701 "bdev_wait_for_examine", 00:05:36.701 "bdev_set_options", 00:05:36.701 "notify_get_notifications", 00:05:36.701 "notify_get_types", 00:05:36.701 "accel_get_stats", 00:05:36.701 "accel_set_options", 00:05:36.701 "accel_set_driver", 00:05:36.701 "accel_crypto_key_destroy", 00:05:36.701 "accel_crypto_keys_get", 00:05:36.701 "accel_crypto_key_create", 00:05:36.701 "accel_assign_opc", 00:05:36.701 "accel_get_module_info", 00:05:36.701 "accel_get_opc_assignments", 00:05:36.701 "vmd_rescan", 00:05:36.701 "vmd_remove_device", 00:05:36.701 "vmd_enable", 00:05:36.701 "sock_set_default_impl", 00:05:36.701 "sock_impl_set_options", 00:05:36.701 "sock_impl_get_options", 00:05:36.701 "iobuf_get_stats", 00:05:36.701 "iobuf_set_options", 00:05:36.701 "framework_get_pci_devices", 00:05:36.701 "framework_get_config", 00:05:36.701 "framework_get_subsystems", 00:05:36.701 "trace_get_info", 00:05:36.701 "trace_get_tpoint_group_mask", 00:05:36.701 "trace_disable_tpoint_group", 00:05:36.701 "trace_enable_tpoint_group", 00:05:36.701 "trace_clear_tpoint_mask", 00:05:36.701 "trace_set_tpoint_mask", 00:05:36.701 "spdk_get_version", 00:05:36.701 "rpc_get_methods" 00:05:36.701 ] 00:05:36.701 04:45:43 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:36.701 04:45:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:36.701 04:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:36.961 04:45:43 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:36.961 04:45:43 -- spdkcli/tcp.sh@38 -- # killprocess 57405 00:05:36.961 
04:45:43 -- common/autotest_common.sh@926 -- # '[' -z 57405 ']' 00:05:36.961 04:45:43 -- common/autotest_common.sh@930 -- # kill -0 57405 00:05:36.961 04:45:43 -- common/autotest_common.sh@931 -- # uname 00:05:36.961 04:45:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.961 04:45:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57405 00:05:36.961 04:45:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.961 04:45:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.961 04:45:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57405' 00:05:36.961 killing process with pid 57405 00:05:36.961 04:45:43 -- common/autotest_common.sh@945 -- # kill 57405 00:05:36.961 04:45:43 -- common/autotest_common.sh@950 -- # wait 57405 00:05:38.866 00:05:38.866 real 0m3.923s 00:05:38.866 user 0m7.353s 00:05:38.866 sys 0m0.502s 00:05:38.866 04:45:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.866 04:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:38.866 ************************************ 00:05:38.866 END TEST spdkcli_tcp 00:05:38.866 ************************************ 00:05:38.867 04:45:45 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.867 04:45:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.867 04:45:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.867 04:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:38.867 ************************************ 00:05:38.867 START TEST dpdk_mem_utility 00:05:38.867 ************************************ 00:05:38.867 04:45:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.867 * Looking for test storage... 00:05:38.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:38.867 04:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.867 04:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57515 00:05:38.867 04:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57515 00:05:38.867 04:45:45 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.867 04:45:45 -- common/autotest_common.sh@819 -- # '[' -z 57515 ']' 00:05:38.867 04:45:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.867 04:45:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.867 04:45:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.867 04:45:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.867 04:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:38.867 [2024-05-12 04:45:45.905771] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
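The dpdk_mem_utility run that starts here is a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK allocation state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then post-processes that dump, first as a heap/mempool/memzone summary and then, with -m 0, as the full per-element map for heap 0 reproduced below. A sketch of the same flow against a target on the default /var/tmp/spdk.sock:

# Dump DPDK memory stats from a running spdk_tgt and summarize them.
rootdir=/home/vagrant/spdk_repo/spdk

"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
"$rootdir/scripts/dpdk_mem_info.py"                # heaps, mempools, memzones
"$rootdir/scripts/dpdk_mem_info.py" -m 0           # element map for heap 0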
00:05:38.867 [2024-05-12 04:45:45.905989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57515 ] 00:05:39.126 [2024-05-12 04:45:46.078970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.126 [2024-05-12 04:45:46.222856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.126 [2024-05-12 04:45:46.223088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.505 04:45:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.505 04:45:47 -- common/autotest_common.sh@852 -- # return 0 00:05:40.505 04:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.505 04:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.505 04:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.505 04:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:40.505 { 00:05:40.505 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.505 } 00:05:40.505 04:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.505 04:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:40.505 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:40.505 1 heaps totaling size 820.000000 MiB 00:05:40.505 size: 820.000000 MiB heap id: 0 00:05:40.505 end heaps---------- 00:05:40.505 8 mempools totaling size 598.116089 MiB 00:05:40.505 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.505 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.505 size: 84.521057 MiB name: bdev_io_57515 00:05:40.505 size: 51.011292 MiB name: evtpool_57515 00:05:40.505 size: 50.003479 MiB name: msgpool_57515 00:05:40.505 size: 21.763794 MiB name: PDU_Pool 00:05:40.505 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.505 size: 0.026123 MiB name: Session_Pool 00:05:40.505 end mempools------- 00:05:40.505 6 memzones totaling size 4.142822 MiB 00:05:40.505 size: 1.000366 MiB name: RG_ring_0_57515 00:05:40.505 size: 1.000366 MiB name: RG_ring_1_57515 00:05:40.505 size: 1.000366 MiB name: RG_ring_4_57515 00:05:40.505 size: 1.000366 MiB name: RG_ring_5_57515 00:05:40.505 size: 0.125366 MiB name: RG_ring_2_57515 00:05:40.505 size: 0.015991 MiB name: RG_ring_3_57515 00:05:40.505 end memzones------- 00:05:40.505 04:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.767 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:05:40.767 list of free elements. 
size: 18.451294 MiB 00:05:40.767 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:40.767 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:40.767 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:40.767 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:40.767 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:40.767 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:40.767 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:40.767 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:40.767 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:40.767 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:40.767 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:40.767 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:40.767 element at address: 0x20001b000000 with size: 0.564880 MiB 00:05:40.767 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:40.767 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:40.767 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:40.767 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:40.767 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:40.767 list of standard malloc elements. size: 199.284302 MiB 00:05:40.767 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:40.767 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:40.767 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:40.767 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:40.767 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:40.767 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:40.767 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:40.767 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:40.767 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:40.767 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:40.767 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:40.767 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:40.767 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:40.767 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:40.767 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:40.767 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:40.768 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:40.768 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b092bc0 with size: 0.000244 MiB 
00:05:40.768 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:40.768 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:40.768 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:40.768 element at 
address: 0x20002846b580 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e680 
with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:40.768 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:40.769 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:40.769 list of memzone associated elements. 
size: 602.264404 MiB 00:05:40.769 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:40.769 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.769 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:40.769 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.769 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:40.769 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57515_0 00:05:40.769 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:40.769 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57515_0 00:05:40.769 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:40.769 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57515_0 00:05:40.769 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:40.769 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.769 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:40.769 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.769 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:40.769 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57515 00:05:40.769 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:40.769 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57515 00:05:40.769 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:40.769 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57515 00:05:40.769 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:40.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.769 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:40.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.769 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:40.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.769 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:40.769 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.769 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:40.769 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57515 00:05:40.769 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:40.769 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57515 00:05:40.769 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:40.769 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57515 00:05:40.769 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:40.769 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57515 00:05:40.769 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:40.769 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57515 00:05:40.769 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:40.769 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.769 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:40.769 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.769 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:40.769 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.769 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:40.769 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57515 00:05:40.769 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:40.769 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.769 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:40.769 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.769 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:40.769 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57515 00:05:40.769 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:40.769 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.769 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:40.769 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57515 00:05:40.769 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:40.769 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57515 00:05:40.769 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:40.769 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.769 04:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.769 04:45:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57515 00:05:40.769 04:45:47 -- common/autotest_common.sh@926 -- # '[' -z 57515 ']' 00:05:40.769 04:45:47 -- common/autotest_common.sh@930 -- # kill -0 57515 00:05:40.769 04:45:47 -- common/autotest_common.sh@931 -- # uname 00:05:40.769 04:45:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.769 04:45:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57515 00:05:40.769 04:45:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.769 killing process with pid 57515 00:05:40.769 04:45:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.769 04:45:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57515' 00:05:40.769 04:45:47 -- common/autotest_common.sh@945 -- # kill 57515 00:05:40.769 04:45:47 -- common/autotest_common.sh@950 -- # wait 57515 00:05:42.675 ************************************ 00:05:42.675 END TEST dpdk_mem_utility 00:05:42.675 ************************************ 00:05:42.675 00:05:42.675 real 0m3.648s 00:05:42.675 user 0m3.940s 00:05:42.675 sys 0m0.428s 00:05:42.675 04:45:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.675 04:45:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.675 04:45:49 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.675 04:45:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.675 04:45:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.675 04:45:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.675 ************************************ 00:05:42.675 START TEST event 00:05:42.675 ************************************ 00:05:42.675 04:45:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.675 * Looking for test storage... 
00:05:42.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.675 04:45:49 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:42.675 04:45:49 -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.675 04:45:49 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.675 04:45:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:42.675 04:45:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.675 04:45:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.675 ************************************ 00:05:42.675 START TEST event_perf 00:05:42.675 ************************************ 00:05:42.675 04:45:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.675 Running I/O for 1 seconds...[2024-05-12 04:45:49.556339] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:42.675 [2024-05-12 04:45:49.556494] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57616 ] 00:05:42.675 [2024-05-12 04:45:49.729966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.934 [2024-05-12 04:45:49.942474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.934 [2024-05-12 04:45:49.942613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.934 [2024-05-12 04:45:49.943064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.934 [2024-05-12 04:45:49.943067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.321 Running I/O for 1 seconds... 00:05:44.321 lcore 0: 199443 00:05:44.321 lcore 1: 199442 00:05:44.321 lcore 2: 199443 00:05:44.321 lcore 3: 199443 00:05:44.321 done. 00:05:44.321 00:05:44.321 real 0m1.747s 00:05:44.321 user 0m4.513s 00:05:44.321 sys 0m0.113s 00:05:44.321 04:45:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.321 04:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.321 ************************************ 00:05:44.321 END TEST event_perf 00:05:44.321 ************************************ 00:05:44.321 04:45:51 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.321 04:45:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:44.321 04:45:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.321 04:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.322 ************************************ 00:05:44.322 START TEST event_reactor 00:05:44.322 ************************************ 00:05:44.322 04:45:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.322 [2024-05-12 04:45:51.343973] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:44.322 [2024-05-12 04:45:51.344751] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57661 ] 00:05:44.581 [2024-05-12 04:45:51.497499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.581 [2024-05-12 04:45:51.658082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.957 test_start 00:05:45.957 oneshot 00:05:45.957 tick 100 00:05:45.957 tick 100 00:05:45.957 tick 250 00:05:45.957 tick 100 00:05:45.957 tick 100 00:05:45.957 tick 250 00:05:45.957 tick 500 00:05:45.957 tick 100 00:05:45.957 tick 100 00:05:45.957 tick 100 00:05:45.957 tick 250 00:05:45.957 tick 100 00:05:45.957 tick 100 00:05:45.957 test_end 00:05:45.957 00:05:45.957 real 0m1.635s 00:05:45.957 user 0m1.454s 00:05:45.957 sys 0m0.071s 00:05:45.957 04:45:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.957 04:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.957 ************************************ 00:05:45.957 END TEST event_reactor 00:05:45.957 ************************************ 00:05:45.957 04:45:52 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.957 04:45:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:45.957 04:45:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.957 04:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:45.957 ************************************ 00:05:45.958 START TEST event_reactor_perf 00:05:45.958 ************************************ 00:05:45.958 04:45:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.958 [2024-05-12 04:45:53.044597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:45.958 [2024-05-12 04:45:53.044753] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57692 ] 00:05:46.217 [2024-05-12 04:45:53.211679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.475 [2024-05-12 04:45:53.362899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.853 test_start 00:05:47.853 test_end 00:05:47.853 Performance: 330766 events per second 00:05:47.853 00:05:47.853 real 0m1.659s 00:05:47.853 user 0m1.462s 00:05:47.853 sys 0m0.086s 00:05:47.853 04:45:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.853 04:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.853 ************************************ 00:05:47.853 END TEST event_reactor_perf 00:05:47.853 ************************************ 00:05:47.853 04:45:54 -- event/event.sh@49 -- # uname -s 00:05:47.853 04:45:54 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:47.853 04:45:54 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.853 04:45:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.853 04:45:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.853 04:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.853 ************************************ 00:05:47.853 START TEST event_scheduler 00:05:47.853 ************************************ 00:05:47.853 04:45:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.853 * Looking for test storage... 00:05:47.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:47.853 04:45:54 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:47.853 04:45:54 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57759 00:05:47.853 04:45:54 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.853 04:45:54 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:47.853 04:45:54 -- scheduler/scheduler.sh@37 -- # waitforlisten 57759 00:05:47.853 04:45:54 -- common/autotest_common.sh@819 -- # '[' -z 57759 ']' 00:05:47.853 04:45:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.853 04:45:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.853 04:45:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.853 04:45:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.853 04:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.853 [2024-05-12 04:45:54.891464] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:47.853 [2024-05-12 04:45:54.891653] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57759 ] 00:05:48.111 [2024-05-12 04:45:55.069662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.111 [2024-05-12 04:45:55.227086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.111 [2024-05-12 04:45:55.227243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.111 [2024-05-12 04:45:55.227806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.111 [2024-05-12 04:45:55.227807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.678 04:45:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.678 04:45:55 -- common/autotest_common.sh@852 -- # return 0 00:05:48.678 04:45:55 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:48.678 04:45:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.678 04:45:55 -- common/autotest_common.sh@10 -- # set +x 00:05:48.678 POWER: Env isn't set yet! 00:05:48.678 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:48.678 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.678 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.678 POWER: Attempting to initialise PSTAT power management... 00:05:48.678 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.678 POWER: Cannot set governor of lcore 0 to performance 00:05:48.678 POWER: Attempting to initialise AMD PSTATE power management... 00:05:48.678 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.678 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.678 POWER: Attempting to initialise CPPC power management... 00:05:48.678 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.678 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.678 POWER: Attempting to initialise VM power management... 00:05:48.678 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:48.678 POWER: Unable to set Power Management Environment for lcore 0 00:05:48.678 [2024-05-12 04:45:55.765573] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:48.678 [2024-05-12 04:45:55.765614] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:48.678 [2024-05-12 04:45:55.765644] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:48.678 04:45:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.678 04:45:55 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:48.678 04:45:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.678 04:45:55 -- common/autotest_common.sh@10 -- # set +x 00:05:48.937 [2024-05-12 04:45:56.006148] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
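The scheduler_create_thread phase that follows is driven entirely through the plugin RPC surface visible in the trace. A condensed sketch of that sequence, assuming the scheduler test app is already listening on the default /var/tmp/spdk.sock and that rpc.py can import the test's scheduler_plugin (the rpc_cmd wrapper seen in the log normally resolves both of these):

  # helper: invoke rpc.py with the scheduler plugin loaded (path per this log)
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }

  # one fully busy thread pinned to each core of the 0xF mask (-a 100 = 100% active)
  for mask in 0x1 0x2 0x4 0x8; do
      rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
  done

  # matching idle threads pinned to the same cores
  for mask in 0x1 0x2 0x4 0x8; do
      rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done

  # an unpinned thread that is busy roughly 30% of the time
  rpc scheduler_thread_create -n one_third_active -a 30

  # create an idle thread, then raise it to 50% using the thread id it returns
  tid=$(rpc scheduler_thread_create -n half_active -a 0)
  rpc scheduler_thread_set_active "$tid" 50

  # create-and-delete round trip
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"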
00:05:48.937 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.937 04:45:56 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:48.937 04:45:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.937 04:45:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.937 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.937 ************************************ 00:05:48.937 START TEST scheduler_create_thread 00:05:48.937 ************************************ 00:05:48.937 04:45:56 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:48.937 04:45:56 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:48.937 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.937 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.937 2 00:05:48.937 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.937 04:45:56 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:48.937 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.937 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.937 3 00:05:48.937 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.937 04:45:56 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:48.937 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.937 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.937 4 00:05:48.937 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.937 04:45:56 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:48.937 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.937 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.937 5 00:05:48.937 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.937 04:45:56 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:48.937 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.937 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.196 6 00:05:49.196 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.196 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.196 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.196 7 00:05:49.196 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.196 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.196 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.196 8 00:05:49.196 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.196 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.196 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.196 9 00:05:49.196 
04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.196 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.196 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.196 10 00:05:49.196 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:49.196 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.196 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.196 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.196 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.196 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:49.196 04:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.196 04:45:56 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.196 04:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.196 04:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:50.132 04:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:50.132 04:45:57 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.132 04:45:57 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.132 04:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:50.132 04:45:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.069 04:45:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.069 00:05:51.069 real 0m2.136s 00:05:51.069 user 0m0.021s 00:05:51.069 sys 0m0.004s 00:05:51.069 04:45:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.069 04:45:58 -- common/autotest_common.sh@10 -- # set +x 00:05:51.069 ************************************ 00:05:51.069 END TEST scheduler_create_thread 00:05:51.069 ************************************ 00:05:51.329 04:45:58 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.329 04:45:58 -- scheduler/scheduler.sh@46 -- # killprocess 57759 00:05:51.329 04:45:58 -- common/autotest_common.sh@926 -- # '[' -z 57759 ']' 00:05:51.329 04:45:58 -- common/autotest_common.sh@930 -- # kill -0 57759 00:05:51.329 04:45:58 -- common/autotest_common.sh@931 -- # uname 00:05:51.329 04:45:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.329 04:45:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57759 00:05:51.329 04:45:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:51.329 04:45:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:51.329 killing process with pid 57759 00:05:51.329 04:45:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57759' 00:05:51.329 04:45:58 -- common/autotest_common.sh@945 -- # kill 57759 00:05:51.329 04:45:58 -- common/autotest_common.sh@950 -- # wait 57759 00:05:51.588 [2024-05-12 04:45:58.634561] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
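The killprocess call traced above follows the probe/inspect/kill/reap pattern used throughout common/autotest_common.sh. An illustrative reconstruction from the steps visible in the log; the real helper has extra branches (for example, when the target is a sudo wrapper it signals the child instead), which are elided here:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                    # no pid, nothing to do
      kill -0 "$pid" || return 1                   # probe: is the process still alive?
      local name=""
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0, reactor_2
      fi
      [ "$name" = sudo ] && return 1               # real helper descends to the child here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap it and propagate exit status
  }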
00:05:52.525 00:05:52.525 real 0m4.839s 00:05:52.525 user 0m8.098s 00:05:52.525 sys 0m0.387s 00:05:52.525 04:45:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.525 04:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.525 ************************************ 00:05:52.525 END TEST event_scheduler 00:05:52.525 ************************************ 00:05:52.525 04:45:59 -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.525 04:45:59 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.525 04:45:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.525 04:45:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.525 04:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.525 ************************************ 00:05:52.525 START TEST app_repeat 00:05:52.525 ************************************ 00:05:52.525 04:45:59 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:52.525 04:45:59 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.525 04:45:59 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.525 04:45:59 -- event/event.sh@13 -- # local nbd_list 00:05:52.525 04:45:59 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.525 04:45:59 -- event/event.sh@14 -- # local bdev_list 00:05:52.525 04:45:59 -- event/event.sh@15 -- # local repeat_times=4 00:05:52.525 04:45:59 -- event/event.sh@17 -- # modprobe nbd 00:05:52.525 04:45:59 -- event/event.sh@19 -- # repeat_pid=57865 00:05:52.525 04:45:59 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.525 04:45:59 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.525 Process app_repeat pid: 57865 00:05:52.525 04:45:59 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57865' 00:05:52.525 04:45:59 -- event/event.sh@23 -- # for i in {0..2} 00:05:52.525 spdk_app_start Round 0 00:05:52.525 04:45:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.525 04:45:59 -- event/event.sh@25 -- # waitforlisten 57865 /var/tmp/spdk-nbd.sock 00:05:52.525 04:45:59 -- common/autotest_common.sh@819 -- # '[' -z 57865 ']' 00:05:52.525 04:45:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.525 04:45:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.525 04:45:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.526 04:45:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.526 04:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.783 [2024-05-12 04:45:59.664787] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:52.783 [2024-05-12 04:45:59.664930] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57865 ] 00:05:52.783 [2024-05-12 04:45:59.819126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.041 [2024-05-12 04:45:59.984752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.041 [2024-05-12 04:45:59.984767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.609 04:46:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.609 04:46:00 -- common/autotest_common.sh@852 -- # return 0 00:05:53.609 04:46:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.867 Malloc0 00:05:53.867 04:46:00 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.126 Malloc1 00:05:54.126 04:46:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@12 -- # local i 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.126 04:46:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.385 /dev/nbd0 00:05:54.385 04:46:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.385 04:46:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.385 04:46:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:54.385 04:46:01 -- common/autotest_common.sh@857 -- # local i 00:05:54.385 04:46:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:54.385 04:46:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:54.385 04:46:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:54.385 04:46:01 -- common/autotest_common.sh@861 -- # break 00:05:54.385 04:46:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:54.385 04:46:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:54.385 04:46:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.385 1+0 records in 00:05:54.385 1+0 records out 00:05:54.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254525 s, 16.1 MB/s 00:05:54.385 04:46:01 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.385 04:46:01 -- common/autotest_common.sh@874 -- # size=4096 00:05:54.385 04:46:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.385 04:46:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:54.385 04:46:01 -- common/autotest_common.sh@877 -- # return 0 00:05:54.385 04:46:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.385 04:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.385 04:46:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.643 /dev/nbd1 00:05:54.643 04:46:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.643 04:46:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.643 04:46:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:54.643 04:46:01 -- common/autotest_common.sh@857 -- # local i 00:05:54.644 04:46:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:54.644 04:46:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:54.644 04:46:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:54.644 04:46:01 -- common/autotest_common.sh@861 -- # break 00:05:54.644 04:46:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:54.644 04:46:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:54.644 04:46:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.644 1+0 records in 00:05:54.644 1+0 records out 00:05:54.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268419 s, 15.3 MB/s 00:05:54.644 04:46:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.644 04:46:01 -- common/autotest_common.sh@874 -- # size=4096 00:05:54.644 04:46:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.644 04:46:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:54.644 04:46:01 -- common/autotest_common.sh@877 -- # return 0 00:05:54.644 04:46:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.644 04:46:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.644 04:46:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.644 04:46:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.644 04:46:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.902 { 00:05:54.902 "nbd_device": "/dev/nbd0", 00:05:54.902 "bdev_name": "Malloc0" 00:05:54.902 }, 00:05:54.902 { 00:05:54.902 "nbd_device": "/dev/nbd1", 00:05:54.902 "bdev_name": "Malloc1" 00:05:54.902 } 00:05:54.902 ]' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.902 { 00:05:54.902 "nbd_device": "/dev/nbd0", 00:05:54.902 "bdev_name": "Malloc0" 00:05:54.902 }, 00:05:54.902 { 00:05:54.902 "nbd_device": "/dev/nbd1", 00:05:54.902 "bdev_name": "Malloc1" 00:05:54.902 } 00:05:54.902 ]' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.902 /dev/nbd1' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.902 /dev/nbd1' 00:05:54.902 04:46:01 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.902 256+0 records in 00:05:54.902 256+0 records out 00:05:54.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106091 s, 98.8 MB/s 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.902 256+0 records in 00:05:54.902 256+0 records out 00:05:54.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226663 s, 46.3 MB/s 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.902 256+0 records in 00:05:54.902 256+0 records out 00:05:54.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0364966 s, 28.7 MB/s 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@51 -- # local i 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.902 04:46:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@41 -- # break 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.160 04:46:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@41 -- # break 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.418 04:46:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@65 -- # true 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.676 04:46:02 -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.676 04:46:02 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.242 04:46:03 -- event/event.sh@35 -- # sleep 3 00:05:57.179 [2024-05-12 04:46:04.134762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.179 [2024-05-12 04:46:04.285293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.179 [2024-05-12 04:46:04.285297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.438 [2024-05-12 04:46:04.435858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.438 [2024-05-12 04:46:04.435971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
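Each app_repeat round performs the nbd data-verify cycle traced above: export the two Malloc bdevs as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data to each, and compare it back. Condensed into a sketch, assuming the nbd devices are already exported via nbd_start_disk and that $testdir points at the test's scratch directory (spdk/test/event in this run):

  tmp=$testdir/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # raw write to the device
      cmp -b -n 1M "$tmp" "$nbd"                             # byte-for-byte readback check
  done
  rm "$tmp"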
00:05:59.380 04:46:06 -- event/event.sh@23 -- # for i in {0..2} 00:05:59.380 spdk_app_start Round 1 00:05:59.380 04:46:06 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.380 04:46:06 -- event/event.sh@25 -- # waitforlisten 57865 /var/tmp/spdk-nbd.sock 00:05:59.380 04:46:06 -- common/autotest_common.sh@819 -- # '[' -z 57865 ']' 00:05:59.380 04:46:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.380 04:46:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.380 04:46:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.380 04:46:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.380 04:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:59.380 04:46:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.380 04:46:06 -- common/autotest_common.sh@852 -- # return 0 00:05:59.380 04:46:06 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.639 Malloc0 00:05:59.639 04:46:06 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.898 Malloc1 00:05:59.898 04:46:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@12 -- # local i 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.898 04:46:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.157 /dev/nbd0 00:06:00.157 04:46:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.157 04:46:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.157 04:46:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:00.157 04:46:07 -- common/autotest_common.sh@857 -- # local i 00:06:00.157 04:46:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:00.157 04:46:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:00.157 04:46:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:00.157 04:46:07 -- common/autotest_common.sh@861 -- # break 00:06:00.157 04:46:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:00.157 04:46:07 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:06:00.157 04:46:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.157 1+0 records in 00:06:00.157 1+0 records out 00:06:00.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364436 s, 11.2 MB/s 00:06:00.157 04:46:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.157 04:46:07 -- common/autotest_common.sh@874 -- # size=4096 00:06:00.157 04:46:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.157 04:46:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:00.157 04:46:07 -- common/autotest_common.sh@877 -- # return 0 00:06:00.157 04:46:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.157 04:46:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.157 04:46:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.416 /dev/nbd1 00:06:00.416 04:46:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.416 04:46:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.416 04:46:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:00.416 04:46:07 -- common/autotest_common.sh@857 -- # local i 00:06:00.416 04:46:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:00.416 04:46:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:00.416 04:46:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:00.416 04:46:07 -- common/autotest_common.sh@861 -- # break 00:06:00.416 04:46:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:00.416 04:46:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:00.416 04:46:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.416 1+0 records in 00:06:00.416 1+0 records out 00:06:00.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383561 s, 10.7 MB/s 00:06:00.416 04:46:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.416 04:46:07 -- common/autotest_common.sh@874 -- # size=4096 00:06:00.416 04:46:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.416 04:46:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:00.416 04:46:07 -- common/autotest_common.sh@877 -- # return 0 00:06:00.416 04:46:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.416 04:46:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.416 04:46:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.416 04:46:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.416 04:46:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.675 04:46:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.675 { 00:06:00.675 "nbd_device": "/dev/nbd0", 00:06:00.675 "bdev_name": "Malloc0" 00:06:00.675 }, 00:06:00.675 { 00:06:00.675 "nbd_device": "/dev/nbd1", 00:06:00.675 "bdev_name": "Malloc1" 00:06:00.675 } 00:06:00.675 ]' 00:06:00.675 04:46:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.675 { 00:06:00.675 "nbd_device": "/dev/nbd0", 00:06:00.675 "bdev_name": "Malloc0" 00:06:00.675 }, 00:06:00.675 { 00:06:00.675 "nbd_device": "/dev/nbd1", 00:06:00.675 "bdev_name": "Malloc1" 00:06:00.675 } 
00:06:00.675 ]' 00:06:00.675 04:46:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.934 /dev/nbd1' 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.934 /dev/nbd1' 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.934 04:46:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.935 256+0 records in 00:06:00.935 256+0 records out 00:06:00.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00748192 s, 140 MB/s 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.935 256+0 records in 00:06:00.935 256+0 records out 00:06:00.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028815 s, 36.4 MB/s 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.935 256+0 records in 00:06:00.935 256+0 records out 00:06:00.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321585 s, 32.6 MB/s 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:00.935 04:46:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@51 -- # local i 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.935 04:46:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@41 -- # break 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.194 04:46:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@41 -- # break 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.454 04:46:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@65 -- # true 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.713 04:46:08 -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.713 04:46:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.972 04:46:09 -- event/event.sh@35 -- # sleep 3 00:06:02.909 [2024-05-12 04:46:09.992807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.168 [2024-05-12 04:46:10.143468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.168 [2024-05-12 04:46:10.143472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.168 [2024-05-12 04:46:10.284479] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:03.168 [2024-05-12 04:46:10.284577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.072 spdk_app_start Round 2 00:06:05.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.073 04:46:12 -- event/event.sh@23 -- # for i in {0..2} 00:06:05.073 04:46:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:05.073 04:46:12 -- event/event.sh@25 -- # waitforlisten 57865 /var/tmp/spdk-nbd.sock 00:06:05.073 04:46:12 -- common/autotest_common.sh@819 -- # '[' -z 57865 ']' 00:06:05.073 04:46:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.073 04:46:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.073 04:46:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.073 04:46:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.073 04:46:12 -- common/autotest_common.sh@10 -- # set +x 00:06:05.330 04:46:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.330 04:46:12 -- common/autotest_common.sh@852 -- # return 0 00:06:05.330 04:46:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.587 Malloc0 00:06:05.587 04:46:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.846 Malloc1 00:06:05.846 04:46:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@12 -- # local i 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.846 04:46:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.105 /dev/nbd0 00:06:06.105 04:46:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.105 04:46:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.105 04:46:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:06.105 04:46:13 -- common/autotest_common.sh@857 -- # local i 00:06:06.105 04:46:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:06.105 04:46:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:06.105 04:46:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:06.105 04:46:13 -- common/autotest_common.sh@861 
-- # break 00:06:06.105 04:46:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:06.105 04:46:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:06.105 04:46:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.105 1+0 records in 00:06:06.105 1+0 records out 00:06:06.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289308 s, 14.2 MB/s 00:06:06.105 04:46:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.105 04:46:13 -- common/autotest_common.sh@874 -- # size=4096 00:06:06.105 04:46:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.105 04:46:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:06.105 04:46:13 -- common/autotest_common.sh@877 -- # return 0 00:06:06.105 04:46:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.105 04:46:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.105 04:46:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.364 /dev/nbd1 00:06:06.364 04:46:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.364 04:46:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.364 04:46:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:06.364 04:46:13 -- common/autotest_common.sh@857 -- # local i 00:06:06.364 04:46:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:06.364 04:46:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:06.364 04:46:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:06.364 04:46:13 -- common/autotest_common.sh@861 -- # break 00:06:06.364 04:46:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:06.364 04:46:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:06.364 04:46:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.364 1+0 records in 00:06:06.364 1+0 records out 00:06:06.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308136 s, 13.3 MB/s 00:06:06.364 04:46:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.364 04:46:13 -- common/autotest_common.sh@874 -- # size=4096 00:06:06.364 04:46:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.364 04:46:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:06.364 04:46:13 -- common/autotest_common.sh@877 -- # return 0 00:06:06.364 04:46:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.364 04:46:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.364 04:46:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.364 04:46:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.364 04:46:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.624 { 00:06:06.624 "nbd_device": "/dev/nbd0", 00:06:06.624 "bdev_name": "Malloc0" 00:06:06.624 }, 00:06:06.624 { 00:06:06.624 "nbd_device": "/dev/nbd1", 00:06:06.624 "bdev_name": "Malloc1" 00:06:06.624 } 00:06:06.624 ]' 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.624 { 00:06:06.624 "nbd_device": "/dev/nbd0", 00:06:06.624 
"bdev_name": "Malloc0" 00:06:06.624 }, 00:06:06.624 { 00:06:06.624 "nbd_device": "/dev/nbd1", 00:06:06.624 "bdev_name": "Malloc1" 00:06:06.624 } 00:06:06.624 ]' 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.624 /dev/nbd1' 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.624 /dev/nbd1' 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.624 256+0 records in 00:06:06.624 256+0 records out 00:06:06.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671157 s, 156 MB/s 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.624 04:46:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.883 256+0 records in 00:06:06.883 256+0 records out 00:06:06.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252354 s, 41.6 MB/s 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.883 256+0 records in 00:06:06.883 256+0 records out 00:06:06.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362873 s, 28.9 MB/s 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.883 04:46:13 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@51 -- # local i 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.883 04:46:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@41 -- # break 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.142 04:46:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.400 04:46:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.400 04:46:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.400 04:46:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.400 04:46:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.400 04:46:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@41 -- # break 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.401 04:46:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@65 -- # true 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.659 04:46:14 -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.659 04:46:14 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.918 04:46:14 -- event/event.sh@35 -- # sleep 3 00:06:08.855 [2024-05-12 04:46:15.900053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.115 [2024-05-12 04:46:16.056878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.115 [2024-05-12 04:46:16.056881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.115 [2024-05-12 04:46:16.204477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:09.115 [2024-05-12 04:46:16.204539] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.018 04:46:17 -- event/event.sh@38 -- # waitforlisten 57865 /var/tmp/spdk-nbd.sock 00:06:11.018 04:46:17 -- common/autotest_common.sh@819 -- # '[' -z 57865 ']' 00:06:11.018 04:46:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.018 04:46:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.018 04:46:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.018 04:46:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.018 04:46:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.278 04:46:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.278 04:46:18 -- common/autotest_common.sh@852 -- # return 0 00:06:11.278 04:46:18 -- event/event.sh@39 -- # killprocess 57865 00:06:11.278 04:46:18 -- common/autotest_common.sh@926 -- # '[' -z 57865 ']' 00:06:11.278 04:46:18 -- common/autotest_common.sh@930 -- # kill -0 57865 00:06:11.278 04:46:18 -- common/autotest_common.sh@931 -- # uname 00:06:11.278 04:46:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:11.278 04:46:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57865 00:06:11.278 04:46:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:11.278 killing process with pid 57865 00:06:11.278 04:46:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:11.278 04:46:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57865' 00:06:11.278 04:46:18 -- common/autotest_common.sh@945 -- # kill 57865 00:06:11.278 04:46:18 -- common/autotest_common.sh@950 -- # wait 57865 00:06:12.220 spdk_app_start is called in Round 0. 00:06:12.220 Shutdown signal received, stop current app iteration 00:06:12.220 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:06:12.220 spdk_app_start is called in Round 1. 00:06:12.220 Shutdown signal received, stop current app iteration 00:06:12.220 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:06:12.220 spdk_app_start is called in Round 2. 00:06:12.220 Shutdown signal received, stop current app iteration 00:06:12.220 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:06:12.220 spdk_app_start is called in Round 3. 
00:06:12.220 Shutdown signal received, stop current app iteration 00:06:12.220 ************************************ 00:06:12.220 END TEST app_repeat 00:06:12.220 ************************************ 00:06:12.220 04:46:19 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.220 04:46:19 -- event/event.sh@42 -- # return 0 00:06:12.220 00:06:12.220 real 0m19.502s 00:06:12.220 user 0m42.144s 00:06:12.220 sys 0m2.495s 00:06:12.220 04:46:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.220 04:46:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.220 04:46:19 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.220 04:46:19 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:12.220 04:46:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.220 04:46:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.221 04:46:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.221 ************************************ 00:06:12.221 START TEST cpu_locks 00:06:12.221 ************************************ 00:06:12.221 04:46:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:12.221 * Looking for test storage... 00:06:12.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:12.221 04:46:19 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.221 04:46:19 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.221 04:46:19 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.221 04:46:19 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.221 04:46:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.221 04:46:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.221 04:46:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.221 ************************************ 00:06:12.221 START TEST default_locks 00:06:12.221 ************************************ 00:06:12.221 04:46:19 -- common/autotest_common.sh@1104 -- # default_locks 00:06:12.221 04:46:19 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58299 00:06:12.221 04:46:19 -- event/cpu_locks.sh@47 -- # waitforlisten 58299 00:06:12.221 04:46:19 -- common/autotest_common.sh@819 -- # '[' -z 58299 ']' 00:06:12.221 04:46:19 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.221 04:46:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.221 04:46:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.221 04:46:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.221 04:46:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.221 04:46:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.490 [2024-05-12 04:46:19.371415] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
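For reference, the nbd write/verify pass traced above condenses to a short dd/cmp loop; a sketch with the paths, sizes, and flags copied from the trace (the full helper is nbd_dd_data_verify in bdev/nbd_common.sh):

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256          # seed 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct # write it through each nbd device, bypassing the page cache
done
for dev in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$tmp" "$dev"                            # read back and compare byte-for-byte
done
rm "$tmp"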
00:06:12.490 [2024-05-12 04:46:19.371577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58299 ] 00:06:12.490 [2024-05-12 04:46:19.538472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.749 [2024-05-12 04:46:19.694786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.749 [2024-05-12 04:46:19.694998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.125 04:46:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.125 04:46:21 -- common/autotest_common.sh@852 -- # return 0 00:06:14.126 04:46:21 -- event/cpu_locks.sh@49 -- # locks_exist 58299 00:06:14.126 04:46:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.126 04:46:21 -- event/cpu_locks.sh@22 -- # lslocks -p 58299 00:06:14.384 04:46:21 -- event/cpu_locks.sh@50 -- # killprocess 58299 00:06:14.384 04:46:21 -- common/autotest_common.sh@926 -- # '[' -z 58299 ']' 00:06:14.384 04:46:21 -- common/autotest_common.sh@930 -- # kill -0 58299 00:06:14.384 04:46:21 -- common/autotest_common.sh@931 -- # uname 00:06:14.384 04:46:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.384 04:46:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58299 00:06:14.384 04:46:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.384 killing process with pid 58299 00:06:14.384 04:46:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.384 04:46:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58299' 00:06:14.384 04:46:21 -- common/autotest_common.sh@945 -- # kill 58299 00:06:14.384 04:46:21 -- common/autotest_common.sh@950 -- # wait 58299 00:06:16.286 04:46:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58299 00:06:16.286 04:46:23 -- common/autotest_common.sh@640 -- # local es=0 00:06:16.286 04:46:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58299 00:06:16.286 04:46:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:16.286 04:46:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.286 04:46:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:16.286 04:46:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.286 04:46:23 -- common/autotest_common.sh@643 -- # waitforlisten 58299 00:06:16.286 04:46:23 -- common/autotest_common.sh@819 -- # '[' -z 58299 ']' 00:06:16.286 04:46:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.286 04:46:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.286 04:46:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
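Every spdk_tgt launch in this suite is gated by the waitforlisten helper seen above; a minimal sketch of the pattern, assuming the simplest shape (the real helper in autotest_common.sh also validates its arguments and uses the max_retries=100 budget visible in the trace):

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for (( i = 100; i > 0; i-- )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
    [[ -S $rpc_addr ]] && return 0           # RPC socket exists: listener is up
    sleep 0.1
  done
  return 1                                   # retries exhausted
}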
00:06:16.286 04:46:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.286 04:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.286 ERROR: process (pid: 58299) is no longer running 00:06:16.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58299) - No such process 00:06:16.286 04:46:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.286 04:46:23 -- common/autotest_common.sh@852 -- # return 1 00:06:16.286 04:46:23 -- common/autotest_common.sh@643 -- # es=1 00:06:16.286 04:46:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:16.286 04:46:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:16.286 04:46:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:16.286 04:46:23 -- event/cpu_locks.sh@54 -- # no_locks 00:06:16.286 04:46:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.286 04:46:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.286 04:46:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.286 00:06:16.286 real 0m3.906s 00:06:16.286 user 0m4.233s 00:06:16.286 sys 0m0.600s 00:06:16.286 ************************************ 00:06:16.286 END TEST default_locks 00:06:16.286 ************************************ 00:06:16.286 04:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.286 04:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.286 04:46:23 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:16.286 04:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.286 04:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.286 04:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.286 ************************************ 00:06:16.286 START TEST default_locks_via_rpc 00:06:16.286 ************************************ 00:06:16.286 04:46:23 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:16.286 04:46:23 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58375 00:06:16.286 04:46:23 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.286 04:46:23 -- event/cpu_locks.sh@63 -- # waitforlisten 58375 00:06:16.286 04:46:23 -- common/autotest_common.sh@819 -- # '[' -z 58375 ']' 00:06:16.286 04:46:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.286 04:46:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:16.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.286 04:46:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.286 04:46:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:16.286 04:46:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.286 [2024-05-12 04:46:23.301079] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
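The locks_exist check traced just above is a one-liner over lslocks; a sketch (the spdk_cpu_lock file name is taken from the trace):

locks_exist() {
  # spdk_tgt holds one POSIX file lock per claimed core, and lslocks
  # lists a process's locks by path, so a grep is sufficient.
  lslocks -p "$1" | grep -q spdk_cpu_lock
}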
00:06:16.286 [2024-05-12 04:46:23.301247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58375 ] 00:06:16.544 [2024-05-12 04:46:23.455470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.544 [2024-05-12 04:46:23.617689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.544 [2024-05-12 04:46:23.617898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.919 04:46:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.919 04:46:24 -- common/autotest_common.sh@852 -- # return 0 00:06:17.919 04:46:24 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:17.919 04:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:17.919 04:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:17.919 04:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:17.919 04:46:24 -- event/cpu_locks.sh@67 -- # no_locks 00:06:17.919 04:46:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.919 04:46:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.919 04:46:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.919 04:46:24 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.919 04:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:17.919 04:46:24 -- common/autotest_common.sh@10 -- # set +x 00:06:17.919 04:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:17.919 04:46:24 -- event/cpu_locks.sh@71 -- # locks_exist 58375 00:06:17.919 04:46:24 -- event/cpu_locks.sh@22 -- # lslocks -p 58375 00:06:17.919 04:46:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.178 04:46:25 -- event/cpu_locks.sh@73 -- # killprocess 58375 00:06:18.178 04:46:25 -- common/autotest_common.sh@926 -- # '[' -z 58375 ']' 00:06:18.178 04:46:25 -- common/autotest_common.sh@930 -- # kill -0 58375 00:06:18.178 04:46:25 -- common/autotest_common.sh@931 -- # uname 00:06:18.178 04:46:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.178 04:46:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58375 00:06:18.178 killing process with pid 58375 00:06:18.178 04:46:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.178 04:46:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.178 04:46:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58375' 00:06:18.178 04:46:25 -- common/autotest_common.sh@945 -- # kill 58375 00:06:18.178 04:46:25 -- common/autotest_common.sh@950 -- # wait 58375 00:06:20.083 00:06:20.083 real 0m3.792s 00:06:20.083 user 0m4.053s 00:06:20.083 sys 0m0.529s 00:06:20.083 04:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.083 ************************************ 00:06:20.083 END TEST default_locks_via_rpc 00:06:20.083 ************************************ 00:06:20.083 04:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.083 04:46:27 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.083 04:46:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.083 04:46:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.083 04:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.083 
************************************ 00:06:20.083 START TEST non_locking_app_on_locked_coremask 00:06:20.083 ************************************ 00:06:20.083 04:46:27 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:20.083 04:46:27 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58451 00:06:20.083 04:46:27 -- event/cpu_locks.sh@81 -- # waitforlisten 58451 /var/tmp/spdk.sock 00:06:20.083 04:46:27 -- common/autotest_common.sh@819 -- # '[' -z 58451 ']' 00:06:20.083 04:46:27 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.083 04:46:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.083 04:46:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.083 04:46:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.083 04:46:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.083 04:46:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.083 [2024-05-12 04:46:27.167018] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:20.083 [2024-05-12 04:46:27.167174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58451 ] 00:06:20.342 [2024-05-12 04:46:27.333814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.602 [2024-05-12 04:46:27.486549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.602 [2024-05-12 04:46:27.486785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.980 04:46:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.980 04:46:28 -- common/autotest_common.sh@852 -- # return 0 00:06:21.980 04:46:28 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58473 00:06:21.980 04:46:28 -- event/cpu_locks.sh@85 -- # waitforlisten 58473 /var/tmp/spdk2.sock 00:06:21.980 04:46:28 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.980 04:46:28 -- common/autotest_common.sh@819 -- # '[' -z 58473 ']' 00:06:21.980 04:46:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.980 04:46:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.980 04:46:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.980 04:46:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.980 04:46:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.980 [2024-05-12 04:46:28.855532] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:21.980 [2024-05-12 04:46:28.855918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58473 ] 00:06:21.980 [2024-05-12 04:46:29.029306] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.980 [2024-05-12 04:46:29.029369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.239 [2024-05-12 04:46:29.364925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.498 [2024-05-12 04:46:29.365169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.401 04:46:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.401 04:46:31 -- common/autotest_common.sh@852 -- # return 0 00:06:24.401 04:46:31 -- event/cpu_locks.sh@87 -- # locks_exist 58451 00:06:24.401 04:46:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.401 04:46:31 -- event/cpu_locks.sh@22 -- # lslocks -p 58451 00:06:24.988 04:46:32 -- event/cpu_locks.sh@89 -- # killprocess 58451 00:06:24.988 04:46:32 -- common/autotest_common.sh@926 -- # '[' -z 58451 ']' 00:06:24.988 04:46:32 -- common/autotest_common.sh@930 -- # kill -0 58451 00:06:24.988 04:46:32 -- common/autotest_common.sh@931 -- # uname 00:06:24.988 04:46:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.988 04:46:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58451 00:06:24.988 killing process with pid 58451 00:06:24.988 04:46:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.988 04:46:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.988 04:46:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58451' 00:06:24.988 04:46:32 -- common/autotest_common.sh@945 -- # kill 58451 00:06:24.988 04:46:32 -- common/autotest_common.sh@950 -- # wait 58451 00:06:29.184 04:46:35 -- event/cpu_locks.sh@90 -- # killprocess 58473 00:06:29.184 04:46:35 -- common/autotest_common.sh@926 -- # '[' -z 58473 ']' 00:06:29.184 04:46:35 -- common/autotest_common.sh@930 -- # kill -0 58473 00:06:29.184 04:46:35 -- common/autotest_common.sh@931 -- # uname 00:06:29.184 04:46:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.184 04:46:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58473 00:06:29.184 killing process with pid 58473 00:06:29.184 04:46:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.184 04:46:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.184 04:46:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58473' 00:06:29.184 04:46:35 -- common/autotest_common.sh@945 -- # kill 58473 00:06:29.184 04:46:35 -- common/autotest_common.sh@950 -- # wait 58473 00:06:30.563 ************************************ 00:06:30.563 END TEST non_locking_app_on_locked_coremask 00:06:30.563 ************************************ 00:06:30.563 00:06:30.563 real 0m10.392s 00:06:30.563 user 0m11.361s 00:06:30.563 sys 0m1.230s 00:06:30.563 04:46:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.563 04:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.563 04:46:37 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:30.563 04:46:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.563 04:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.563 04:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.563 ************************************ 00:06:30.564 START TEST locking_app_on_unlocked_coremask 00:06:30.564 ************************************ 00:06:30.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
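The killprocess calls that close out each test repeat the same traced steps; a reconstruction under stated assumptions (the comparison of the process name against sudo is read here as a guard for elevated wrappers, which is an interpretation, not confirmed by the trace):

killprocess() {
  local pid=$1
  kill -0 "$pid"                            # refuse to act on a dead pid
  local name
  name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in these traces
  [[ $name = sudo ]] && return 1            # assumed guard, see lead-in
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                               # reap it so later lslocks/ps checks see it gone
}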
00:06:30.564 04:46:37 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:30.564 04:46:37 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58608 00:06:30.564 04:46:37 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.564 04:46:37 -- event/cpu_locks.sh@99 -- # waitforlisten 58608 /var/tmp/spdk.sock 00:06:30.564 04:46:37 -- common/autotest_common.sh@819 -- # '[' -z 58608 ']' 00:06:30.564 04:46:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.564 04:46:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.564 04:46:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.564 04:46:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.564 04:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:30.564 [2024-05-12 04:46:37.595640] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:30.564 [2024-05-12 04:46:37.595888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58608 ] 00:06:30.823 [2024-05-12 04:46:37.749972] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.823 [2024-05-12 04:46:37.750028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.823 [2024-05-12 04:46:37.901450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.823 [2024-05-12 04:46:37.901670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.202 04:46:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.202 04:46:39 -- common/autotest_common.sh@852 -- # return 0 00:06:32.202 04:46:39 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58632 00:06:32.202 04:46:39 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.202 04:46:39 -- event/cpu_locks.sh@103 -- # waitforlisten 58632 /var/tmp/spdk2.sock 00:06:32.202 04:46:39 -- common/autotest_common.sh@819 -- # '[' -z 58632 ']' 00:06:32.202 04:46:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.202 04:46:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.202 04:46:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.202 04:46:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.202 04:46:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.202 [2024-05-12 04:46:39.273055] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
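The launch pair traced above is the core of locking_app_on_unlocked_coremask: the first target opts out of core locks so that a second, normally locking target can claim the same core 0. Condensed, with the masks and socket path copied from the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # first: core 0, no lock taken
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second: same core, takes the lock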
00:06:32.202 [2024-05-12 04:46:39.273450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58632 ] 00:06:32.461 [2024-05-12 04:46:39.449136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.720 [2024-05-12 04:46:39.742771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.720 [2024-05-12 04:46:39.743007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.631 04:46:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.631 04:46:41 -- common/autotest_common.sh@852 -- # return 0 00:06:34.631 04:46:41 -- event/cpu_locks.sh@105 -- # locks_exist 58632 00:06:34.631 04:46:41 -- event/cpu_locks.sh@22 -- # lslocks -p 58632 00:06:34.631 04:46:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.566 04:46:42 -- event/cpu_locks.sh@107 -- # killprocess 58608 00:06:35.566 04:46:42 -- common/autotest_common.sh@926 -- # '[' -z 58608 ']' 00:06:35.566 04:46:42 -- common/autotest_common.sh@930 -- # kill -0 58608 00:06:35.566 04:46:42 -- common/autotest_common.sh@931 -- # uname 00:06:35.566 04:46:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.566 04:46:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58608 00:06:35.566 killing process with pid 58608 00:06:35.566 04:46:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.566 04:46:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.566 04:46:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58608' 00:06:35.566 04:46:42 -- common/autotest_common.sh@945 -- # kill 58608 00:06:35.566 04:46:42 -- common/autotest_common.sh@950 -- # wait 58608 00:06:39.791 04:46:46 -- event/cpu_locks.sh@108 -- # killprocess 58632 00:06:39.791 04:46:46 -- common/autotest_common.sh@926 -- # '[' -z 58632 ']' 00:06:39.791 04:46:46 -- common/autotest_common.sh@930 -- # kill -0 58632 00:06:39.791 04:46:46 -- common/autotest_common.sh@931 -- # uname 00:06:39.791 04:46:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.791 04:46:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58632 00:06:39.791 killing process with pid 58632 00:06:39.791 04:46:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.791 04:46:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.791 04:46:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58632' 00:06:39.791 04:46:46 -- common/autotest_common.sh@945 -- # kill 58632 00:06:39.791 04:46:46 -- common/autotest_common.sh@950 -- # wait 58632 00:06:41.170 ************************************ 00:06:41.170 END TEST locking_app_on_unlocked_coremask 00:06:41.170 ************************************ 00:06:41.170 00:06:41.170 real 0m10.454s 00:06:41.170 user 0m11.458s 00:06:41.170 sys 0m1.243s 00:06:41.170 04:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.170 04:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.170 04:46:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.170 04:46:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.170 04:46:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.170 04:46:47 -- common/autotest_common.sh@10 -- # set 
+x 00:06:41.170 ************************************ 00:06:41.170 START TEST locking_app_on_locked_coremask 00:06:41.170 ************************************ 00:06:41.170 04:46:48 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:41.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.170 04:46:48 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58767 00:06:41.170 04:46:48 -- event/cpu_locks.sh@116 -- # waitforlisten 58767 /var/tmp/spdk.sock 00:06:41.170 04:46:48 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.170 04:46:48 -- common/autotest_common.sh@819 -- # '[' -z 58767 ']' 00:06:41.170 04:46:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.170 04:46:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.170 04:46:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.170 04:46:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.170 04:46:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.170 [2024-05-12 04:46:48.124461] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:41.170 [2024-05-12 04:46:48.124883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ] 00:06:41.170 [2024-05-12 04:46:48.290127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.429 [2024-05-12 04:46:48.443902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.429 [2024-05-12 04:46:48.444388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.807 04:46:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.807 04:46:49 -- common/autotest_common.sh@852 -- # return 0 00:06:42.807 04:46:49 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.807 04:46:49 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58795 00:06:42.808 04:46:49 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58795 /var/tmp/spdk2.sock 00:06:42.808 04:46:49 -- common/autotest_common.sh@640 -- # local es=0 00:06:42.808 04:46:49 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58795 /var/tmp/spdk2.sock 00:06:42.808 04:46:49 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:42.808 04:46:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.808 04:46:49 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:42.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.808 04:46:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.808 04:46:49 -- common/autotest_common.sh@643 -- # waitforlisten 58795 /var/tmp/spdk2.sock 00:06:42.808 04:46:49 -- common/autotest_common.sh@819 -- # '[' -z 58795 ']' 00:06:42.808 04:46:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.808 04:46:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.808 04:46:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
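The NOT wrapper set up above inverts a command's exit status while still treating signal deaths as real failures; a sketch reconstructed from the es bookkeeping in the trace (the valid_exec_arg type checks are elided):

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return 1   # status above 128 means killed by a signal: a crash, not a refusal
  (( es != 0 ))                # NOT succeeds only when the wrapped command failed cleanly
}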
00:06:42.808 04:46:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.808 04:46:49 -- common/autotest_common.sh@10 -- # set +x 00:06:42.808 [2024-05-12 04:46:49.853247] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:42.808 [2024-05-12 04:46:49.853398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58795 ] 00:06:43.066 [2024-05-12 04:46:50.018881] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58767 has claimed it. 00:06:43.066 [2024-05-12 04:46:50.018969] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.633 ERROR: process (pid: 58795) is no longer running 00:06:43.633 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58795) - No such process 00:06:43.633 04:46:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.633 04:46:50 -- common/autotest_common.sh@852 -- # return 1 00:06:43.633 04:46:50 -- common/autotest_common.sh@643 -- # es=1 00:06:43.633 04:46:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.633 04:46:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:43.633 04:46:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.633 04:46:50 -- event/cpu_locks.sh@122 -- # locks_exist 58767 00:06:43.633 04:46:50 -- event/cpu_locks.sh@22 -- # lslocks -p 58767 00:06:43.633 04:46:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.893 04:46:50 -- event/cpu_locks.sh@124 -- # killprocess 58767 00:06:43.893 04:46:50 -- common/autotest_common.sh@926 -- # '[' -z 58767 ']' 00:06:43.893 04:46:50 -- common/autotest_common.sh@930 -- # kill -0 58767 00:06:43.893 04:46:50 -- common/autotest_common.sh@931 -- # uname 00:06:43.893 04:46:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:43.893 04:46:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58767 00:06:43.893 killing process with pid 58767 00:06:43.893 04:46:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:43.893 04:46:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:43.893 04:46:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58767' 00:06:43.893 04:46:50 -- common/autotest_common.sh@945 -- # kill 58767 00:06:43.893 04:46:50 -- common/autotest_common.sh@950 -- # wait 58767 00:06:45.802 00:06:45.802 real 0m4.553s 00:06:45.802 user 0m5.078s 00:06:45.802 sys 0m0.677s 00:06:45.802 04:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.802 ************************************ 00:06:45.802 END TEST locking_app_on_locked_coremask 00:06:45.802 ************************************ 00:06:45.802 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.802 04:46:52 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.802 04:46:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.802 04:46:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.802 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.802 ************************************ 00:06:45.802 START TEST locking_overlapped_coremask 00:06:45.802 ************************************ 00:06:45.802 04:46:52 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:45.802 04:46:52 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58855 00:06:45.802 04:46:52 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.802 04:46:52 -- event/cpu_locks.sh@133 -- # waitforlisten 58855 /var/tmp/spdk.sock 00:06:45.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.802 04:46:52 -- common/autotest_common.sh@819 -- # '[' -z 58855 ']' 00:06:45.802 04:46:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.802 04:46:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.802 04:46:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.802 04:46:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.802 04:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.802 [2024-05-12 04:46:52.704854] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:45.802 [2024-05-12 04:46:52.704987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58855 ] 00:06:45.802 [2024-05-12 04:46:52.859684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.062 [2024-05-12 04:46:53.024933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.062 [2024-05-12 04:46:53.025596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.062 [2024-05-12 04:46:53.025743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.062 [2024-05-12 04:46:53.025750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.438 04:46:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.438 04:46:54 -- common/autotest_common.sh@852 -- # return 0 00:06:47.438 04:46:54 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58886 00:06:47.438 04:46:54 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:47.438 04:46:54 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58886 /var/tmp/spdk2.sock 00:06:47.438 04:46:54 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.438 04:46:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58886 /var/tmp/spdk2.sock 00:06:47.438 04:46:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:47.438 04:46:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.438 04:46:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:47.438 04:46:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.438 04:46:54 -- common/autotest_common.sh@643 -- # waitforlisten 58886 /var/tmp/spdk2.sock 00:06:47.438 04:46:54 -- common/autotest_common.sh@819 -- # '[' -z 58886 ']' 00:06:47.438 04:46:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.438 04:46:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.438 04:46:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
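The two masks launched in this test are chosen to collide on exactly one core, which is what locking_overlapped_coremask exercises; the arithmetic:

# 0x7  = 0b00111 -> cores 0,1,2  (first target)
# 0x1c = 0b11100 -> cores 2,3,4  (second target)
printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2
# so the second spdk_tgt must fail to take the core-2 lock, matching the
# claim_cpu_cores ERROR traced below.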
00:06:47.438 04:46:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.438 04:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.438 [2024-05-12 04:46:54.493082] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:47.438 [2024-05-12 04:46:54.493309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:06:47.697 [2024-05-12 04:46:54.670175] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58855 has claimed it. 00:06:47.697 [2024-05-12 04:46:54.670301] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.265 ERROR: process (pid: 58886) is no longer running 00:06:48.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58886) - No such process 00:06:48.265 04:46:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.265 04:46:55 -- common/autotest_common.sh@852 -- # return 1 00:06:48.265 04:46:55 -- common/autotest_common.sh@643 -- # es=1 00:06:48.265 04:46:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:48.265 04:46:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:48.265 04:46:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:48.265 04:46:55 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.265 04:46:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.265 04:46:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.265 04:46:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.265 04:46:55 -- event/cpu_locks.sh@141 -- # killprocess 58855 00:06:48.265 04:46:55 -- common/autotest_common.sh@926 -- # '[' -z 58855 ']' 00:06:48.265 04:46:55 -- common/autotest_common.sh@930 -- # kill -0 58855 00:06:48.265 04:46:55 -- common/autotest_common.sh@931 -- # uname 00:06:48.265 04:46:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:48.265 04:46:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58855 00:06:48.265 killing process with pid 58855 00:06:48.265 04:46:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:48.265 04:46:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:48.265 04:46:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58855' 00:06:48.265 04:46:55 -- common/autotest_common.sh@945 -- # kill 58855 00:06:48.265 04:46:55 -- common/autotest_common.sh@950 -- # wait 58855 00:06:50.171 00:06:50.171 real 0m4.391s 00:06:50.171 user 0m12.156s 00:06:50.171 sys 0m0.532s 00:06:50.171 04:46:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.171 ************************************ 00:06:50.171 END TEST locking_overlapped_coremask 00:06:50.171 ************************************ 00:06:50.171 04:46:57 -- common/autotest_common.sh@10 -- # set +x 00:06:50.171 04:46:57 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.171 04:46:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.171 04:46:57 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.171 04:46:57 -- common/autotest_common.sh@10 -- # set +x 00:06:50.171 ************************************ 00:06:50.171 START TEST locking_overlapped_coremask_via_rpc 00:06:50.171 ************************************ 00:06:50.171 04:46:57 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:50.171 04:46:57 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58939 00:06:50.171 04:46:57 -- event/cpu_locks.sh@149 -- # waitforlisten 58939 /var/tmp/spdk.sock 00:06:50.171 04:46:57 -- common/autotest_common.sh@819 -- # '[' -z 58939 ']' 00:06:50.171 04:46:57 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.171 04:46:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.171 04:46:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.171 04:46:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.171 04:46:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.171 04:46:57 -- common/autotest_common.sh@10 -- # set +x 00:06:50.171 [2024-05-12 04:46:57.156694] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:50.171 [2024-05-12 04:46:57.156841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58939 ] 00:06:50.430 [2024-05-12 04:46:57.314952] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:50.430 [2024-05-12 04:46:57.315031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.430 [2024-05-12 04:46:57.474204] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.430 [2024-05-12 04:46:57.474537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.430 [2024-05-12 04:46:57.474850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.430 [2024-05-12 04:46:57.474851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.848 04:46:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.848 04:46:58 -- common/autotest_common.sh@852 -- # return 0 00:06:51.848 04:46:58 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58970 00:06:51.848 04:46:58 -- event/cpu_locks.sh@153 -- # waitforlisten 58970 /var/tmp/spdk2.sock 00:06:51.849 04:46:58 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.849 04:46:58 -- common/autotest_common.sh@819 -- # '[' -z 58970 ']' 00:06:51.849 04:46:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.849 04:46:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.849 04:46:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
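The via_rpc variant starting here defers locking to runtime: both overlapping targets boot with locks disabled, then each is asked to claim its cores over RPC, and only the first can win core 2, as the trace below shows. A condensed sketch (rpc.py defaults to /var/tmp/spdk.sock when -s is omitted):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                         # first claim succeeds
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 already locked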
00:06:51.849 04:46:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.849 04:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.849 [2024-05-12 04:46:58.894526] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:51.849 [2024-05-12 04:46:58.894938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58970 ] 00:06:52.108 [2024-05-12 04:46:59.064010] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:52.108 [2024-05-12 04:46:59.064086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.367 [2024-05-12 04:46:59.417068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.367 [2024-05-12 04:46:59.417609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.367 [2024-05-12 04:46:59.417719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.367 [2024-05-12 04:46:59.417733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.270 04:47:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.270 04:47:01 -- common/autotest_common.sh@852 -- # return 0 00:06:54.270 04:47:01 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.270 04:47:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.270 04:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.270 04:47:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.270 04:47:01 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.270 04:47:01 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.270 04:47:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.270 04:47:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:54.270 04:47:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.270 04:47:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:54.270 04:47:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.271 04:47:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.271 04:47:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.271 04:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.271 [2024-05-12 04:47:01.310479] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58939 has claimed it. 
00:06:54.271 request: 00:06:54.271 { 00:06:54.271 "method": "framework_enable_cpumask_locks", 00:06:54.271 "req_id": 1 00:06:54.271 } 00:06:54.271 Got JSON-RPC error response 00:06:54.271 response: 00:06:54.271 { 00:06:54.271 "code": -32603, 00:06:54.271 "message": "Failed to claim CPU core: 2" 00:06:54.271 } 00:06:54.271 04:47:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:54.271 04:47:01 -- common/autotest_common.sh@643 -- # es=1 00:06:54.271 04:47:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.271 04:47:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:54.271 04:47:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.271 04:47:01 -- event/cpu_locks.sh@158 -- # waitforlisten 58939 /var/tmp/spdk.sock 00:06:54.271 04:47:01 -- common/autotest_common.sh@819 -- # '[' -z 58939 ']' 00:06:54.271 04:47:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.271 04:47:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:54.271 04:47:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.271 04:47:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:54.271 04:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.529 04:47:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.529 04:47:01 -- common/autotest_common.sh@852 -- # return 0 00:06:54.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.529 04:47:01 -- event/cpu_locks.sh@159 -- # waitforlisten 58970 /var/tmp/spdk2.sock 00:06:54.529 04:47:01 -- common/autotest_common.sh@819 -- # '[' -z 58970 ']' 00:06:54.529 04:47:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.529 04:47:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:54.529 04:47:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
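After the failed claim, the check_remaining_locks step traced below asserts that the lock files on disk belong solely to the first target: exactly one file per core in mask 0x7 and nothing else. A sketch using the globs from the trace:

check_remaining_locks() {
  local locks=(/var/tmp/spdk_cpu_lock_*)
  local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of mask 0x7
  [[ ${locks[*]} == "${locks_expected[*]}" ]]
}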
00:06:54.529 04:47:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:54.529 04:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.788 04:47:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.788 04:47:01 -- common/autotest_common.sh@852 -- # return 0 00:06:54.788 04:47:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.788 04:47:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.788 04:47:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.788 04:47:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.788 00:06:54.788 real 0m4.696s 00:06:54.788 user 0m1.848s 00:06:54.788 sys 0m0.268s 00:06:54.788 04:47:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.788 04:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.788 ************************************ 00:06:54.788 END TEST locking_overlapped_coremask_via_rpc 00:06:54.788 ************************************ 00:06:54.788 04:47:01 -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.788 04:47:01 -- event/cpu_locks.sh@15 -- # [[ -z 58939 ]] 00:06:54.788 04:47:01 -- event/cpu_locks.sh@15 -- # killprocess 58939 00:06:54.788 04:47:01 -- common/autotest_common.sh@926 -- # '[' -z 58939 ']' 00:06:54.788 04:47:01 -- common/autotest_common.sh@930 -- # kill -0 58939 00:06:54.788 04:47:01 -- common/autotest_common.sh@931 -- # uname 00:06:54.788 04:47:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:54.788 04:47:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58939 00:06:54.788 killing process with pid 58939 00:06:54.788 04:47:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:54.788 04:47:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:54.788 04:47:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58939' 00:06:54.788 04:47:01 -- common/autotest_common.sh@945 -- # kill 58939 00:06:54.788 04:47:01 -- common/autotest_common.sh@950 -- # wait 58939 00:06:56.686 04:47:03 -- event/cpu_locks.sh@16 -- # [[ -z 58970 ]] 00:06:56.686 04:47:03 -- event/cpu_locks.sh@16 -- # killprocess 58970 00:06:56.686 04:47:03 -- common/autotest_common.sh@926 -- # '[' -z 58970 ']' 00:06:56.686 04:47:03 -- common/autotest_common.sh@930 -- # kill -0 58970 00:06:56.686 04:47:03 -- common/autotest_common.sh@931 -- # uname 00:06:56.686 04:47:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:56.686 04:47:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58970 00:06:56.686 killing process with pid 58970 00:06:56.686 04:47:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:56.686 04:47:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:56.686 04:47:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58970' 00:06:56.686 04:47:03 -- common/autotest_common.sh@945 -- # kill 58970 00:06:56.686 04:47:03 -- common/autotest_common.sh@950 -- # wait 58970 00:06:58.591 04:47:05 -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.591 04:47:05 -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.591 04:47:05 -- event/cpu_locks.sh@15 -- # [[ -z 58939 ]] 00:06:58.591 04:47:05 -- event/cpu_locks.sh@15 -- # killprocess 58939 00:06:58.591 Process with pid 58939 is 
not found 00:06:58.591 Process with pid 58970 is not found 00:06:58.591 04:47:05 -- common/autotest_common.sh@926 -- # '[' -z 58939 ']' 00:06:58.591 04:47:05 -- common/autotest_common.sh@930 -- # kill -0 58939 00:06:58.591 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58939) - No such process 00:06:58.591 04:47:05 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58939 is not found' 00:06:58.591 04:47:05 -- event/cpu_locks.sh@16 -- # [[ -z 58970 ]] 00:06:58.591 04:47:05 -- event/cpu_locks.sh@16 -- # killprocess 58970 00:06:58.591 04:47:05 -- common/autotest_common.sh@926 -- # '[' -z 58970 ']' 00:06:58.591 04:47:05 -- common/autotest_common.sh@930 -- # kill -0 58970 00:06:58.591 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58970) - No such process 00:06:58.591 04:47:05 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58970 is not found' 00:06:58.591 04:47:05 -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.591 ************************************ 00:06:58.591 END TEST cpu_locks 00:06:58.591 ************************************ 00:06:58.591 00:06:58.591 real 0m46.413s 00:06:58.591 user 1m22.008s 00:06:58.591 sys 0m5.908s 00:06:58.591 04:47:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.591 04:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:58.591 ************************************ 00:06:58.591 END TEST event 00:06:58.591 ************************************ 00:06:58.591 00:06:58.591 real 1m16.209s 00:06:58.591 user 2m19.827s 00:06:58.591 sys 0m9.288s 00:06:58.591 04:47:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.591 04:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:58.591 04:47:05 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.591 04:47:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.591 04:47:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.591 04:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:58.591 ************************************ 00:06:58.591 START TEST thread 00:06:58.591 ************************************ 00:06:58.591 04:47:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.850 * Looking for test storage... 00:06:58.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:58.850 04:47:05 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.850 04:47:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:58.850 04:47:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.850 04:47:05 -- common/autotest_common.sh@10 -- # set +x 00:06:58.850 ************************************ 00:06:58.850 START TEST thread_poller_perf 00:06:58.850 ************************************ 00:06:58.850 04:47:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.850 [2024-05-12 04:47:05.818887] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:58.850 [2024-05-12 04:47:05.819291] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:06:59.109 [2024-05-12 04:47:05.993610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.109 [2024-05-12 04:47:06.213215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.109 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:00.486 ====================================== 00:07:00.486 busy:2211879096 (cyc) 00:07:00.486 total_run_count: 337000 00:07:00.486 tsc_hz: 2200000000 (cyc) 00:07:00.486 ====================================== 00:07:00.486 poller_cost: 6563 (cyc), 2983 (nsec) 00:07:00.486 00:07:00.486 real 0m1.730s 00:07:00.486 user 0m1.508s 00:07:00.486 sys 0m0.110s 00:07:00.486 04:47:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.486 ************************************ 00:07:00.486 END TEST thread_poller_perf 00:07:00.486 ************************************ 00:07:00.486 04:47:07 -- common/autotest_common.sh@10 -- # set +x 00:07:00.486 04:47:07 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.486 04:47:07 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:00.486 04:47:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.486 04:47:07 -- common/autotest_common.sh@10 -- # set +x 00:07:00.486 ************************************ 00:07:00.486 START TEST thread_poller_perf 00:07:00.486 ************************************ 00:07:00.487 04:47:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.487 [2024-05-12 04:47:07.603421] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:00.487 [2024-05-12 04:47:07.603624] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59184 ] 00:07:00.745 [2024-05-12 04:47:07.774594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.004 [2024-05-12 04:47:07.928988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.004 Running 1000 pollers for 1 seconds with 0 microseconds period. 
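The poller_cost figures above are derived, not separately measured: busy cycles divided by total_run_count gives cycles per poller invocation, and scaling by tsc_hz converts that to nanoseconds. A quick recomputation for the 1-microsecond-period run just finished, as a standalone sketch with the numbers copied from its summary block:

    awk 'BEGIN {
      busy = 2211879096; runs = 337000; tsc_hz = 2200000000   # from the stats above
      cyc  = busy / runs                # cycles per poller invocation
      nsec = cyc * 1e9 / tsc_hz         # at 2.2 GHz
      printf "poller_cost: %d (cyc), %d (nsec)\n", int(cyc), int(nsec)   # 6563, 2983
    }'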
00:07:02.383 ====================================== 00:07:02.383 busy:2204699196 (cyc) 00:07:02.383 total_run_count: 4463000 00:07:02.383 tsc_hz: 2200000000 (cyc) 00:07:02.383 ====================================== 00:07:02.383 poller_cost: 493 (cyc), 224 (nsec) 00:07:02.383 00:07:02.383 real 0m1.681s 00:07:02.383 user 0m1.472s 00:07:02.383 sys 0m0.099s 00:07:02.383 04:47:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.383 ************************************ 00:07:02.383 END TEST thread_poller_perf 00:07:02.383 ************************************ 00:07:02.383 04:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.383 04:47:09 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:02.383 ************************************ 00:07:02.383 END TEST thread 00:07:02.383 ************************************ 00:07:02.383 00:07:02.383 real 0m3.602s 00:07:02.383 user 0m3.052s 00:07:02.383 sys 0m0.322s 00:07:02.383 04:47:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.383 04:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.383 04:47:09 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:02.383 04:47:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:02.383 04:47:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.383 04:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.383 ************************************ 00:07:02.383 START TEST accel 00:07:02.383 ************************************ 00:07:02.383 04:47:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:02.383 * Looking for test storage... 00:07:02.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:02.383 04:47:09 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:02.383 04:47:09 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:02.383 04:47:09 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:02.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.383 04:47:09 -- accel/accel.sh@59 -- # spdk_tgt_pid=59258 00:07:02.383 04:47:09 -- accel/accel.sh@60 -- # waitforlisten 59258 00:07:02.383 04:47:09 -- common/autotest_common.sh@819 -- # '[' -z 59258 ']' 00:07:02.383 04:47:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.383 04:47:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:02.383 04:47:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.383 04:47:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:02.383 04:47:09 -- common/autotest_common.sh@10 -- # set +x 00:07:02.383 04:47:09 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:02.383 04:47:09 -- accel/accel.sh@58 -- # build_accel_config 00:07:02.383 04:47:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.383 04:47:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.383 04:47:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.383 04:47:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.383 04:47:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.383 04:47:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.383 04:47:09 -- accel/accel.sh@42 -- # jq -r . 00:07:02.647 [2024-05-12 04:47:09.540554] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
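The accel suite above launches spdk_tgt in the background and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; rpc_addr and max_retries=100 are visible in the trace. A minimal sketch of that wait loop, assuming a plain socket-exists probe (the real helper in autotest_common.sh polls the RPC server itself and handles more failure modes):

    waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
        [[ -S $rpc_addr ]] && return 0           # socket present: assume it is serving
        sleep 0.5
      done
      return 1
    }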
00:07:02.647 [2024-05-12 04:47:09.541576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59258 ] 00:07:02.647 [2024-05-12 04:47:09.714066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.906 [2024-05-12 04:47:09.874128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.906 [2024-05-12 04:47:09.874520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.284 04:47:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:04.284 04:47:11 -- common/autotest_common.sh@852 -- # return 0 00:07:04.284 04:47:11 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:04.284 04:47:11 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:04.284 04:47:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.284 04:47:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.284 04:47:11 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:04.284 04:47:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.284 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.284 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # IFS== 00:07:04.285 04:47:11 -- accel/accel.sh@64 -- # read -r opc module 00:07:04.285 04:47:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:04.285 04:47:11 -- accel/accel.sh@67 -- # killprocess 59258 00:07:04.285 04:47:11 -- common/autotest_common.sh@926 -- # '[' -z 59258 ']' 00:07:04.285 04:47:11 -- common/autotest_common.sh@930 -- # kill -0 59258 00:07:04.285 04:47:11 -- common/autotest_common.sh@931 -- # uname 00:07:04.285 04:47:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:04.285 04:47:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59258 00:07:04.285 killing process with pid 59258 00:07:04.285 04:47:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:04.285 04:47:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:04.285 04:47:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59258' 00:07:04.285 04:47:11 -- common/autotest_common.sh@945 -- # kill 59258 00:07:04.285 04:47:11 -- common/autotest_common.sh@950 -- # wait 59258 00:07:06.190 04:47:12 -- accel/accel.sh@68 -- # trap - ERR 00:07:06.190 04:47:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:06.190 04:47:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:06.190 04:47:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.190 04:47:12 -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 04:47:12 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:06.190 04:47:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:06.190 04:47:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.190 04:47:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.190 04:47:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.190 04:47:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.190 04:47:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.190 04:47:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:07:06.190 04:47:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.190 04:47:12 -- accel/accel.sh@42 -- # jq -r . 00:07:06.190 04:47:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.190 04:47:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 04:47:13 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:06.190 04:47:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.190 04:47:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.190 04:47:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 ************************************ 00:07:06.190 START TEST accel_missing_filename 00:07:06.190 ************************************ 00:07:06.190 04:47:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:06.190 04:47:13 -- common/autotest_common.sh@640 -- # local es=0 00:07:06.190 04:47:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:06.190 04:47:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:06.190 04:47:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.190 04:47:13 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:06.190 04:47:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.190 04:47:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:06.190 04:47:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:06.190 04:47:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.190 04:47:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.190 04:47:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.190 04:47:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.190 04:47:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.190 04:47:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.190 04:47:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.190 04:47:13 -- accel/accel.sh@42 -- # jq -r . 00:07:06.190 [2024-05-12 04:47:13.145810] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:06.190 [2024-05-12 04:47:13.145965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59343 ] 00:07:06.190 [2024-05-12 04:47:13.305621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.449 [2024-05-12 04:47:13.464139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.708 [2024-05-12 04:47:13.616178] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.967 [2024-05-12 04:47:14.019547] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:07.227 A filename is required. 
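That failure is the point of the test: accel_missing_filename wraps accel_perf in the harness's NOT helper, which succeeds only when the wrapped command fails, and the es= lines that follow show the exit status being normalized (234 exceeds 128, so the helper strips the signal offset to get 106 before collapsing to 1). A rough sketch of the pattern; the case mapping is simplified from what autotest_common.sh actually does:

    NOT_sketch() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$((es - 128))   # 128+N convention (234 -> 106)
      case "$es" in
        0) ;;            # wrapped command succeeded
        *) es=1 ;;       # any failure collapses to 1 (real mapping is finer-grained)
      esac
      (( !es == 0 ))     # invert: succeed exactly when the command failed
    }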
00:07:07.227 04:47:14 -- common/autotest_common.sh@643 -- # es=234 00:07:07.227 04:47:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:07.227 04:47:14 -- common/autotest_common.sh@652 -- # es=106 00:07:07.227 04:47:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:07.227 04:47:14 -- common/autotest_common.sh@660 -- # es=1 00:07:07.227 04:47:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:07.227 00:07:07.227 real 0m1.224s 00:07:07.227 user 0m1.018s 00:07:07.227 sys 0m0.142s 00:07:07.227 04:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.227 04:47:14 -- common/autotest_common.sh@10 -- # set +x 00:07:07.227 ************************************ 00:07:07.227 END TEST accel_missing_filename 00:07:07.227 ************************************ 00:07:07.486 04:47:14 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:07.486 04:47:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:07.486 04:47:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.486 04:47:14 -- common/autotest_common.sh@10 -- # set +x 00:07:07.486 ************************************ 00:07:07.486 START TEST accel_compress_verify 00:07:07.486 ************************************ 00:07:07.486 04:47:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:07.486 04:47:14 -- common/autotest_common.sh@640 -- # local es=0 00:07:07.486 04:47:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:07.486 04:47:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:07.486 04:47:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.486 04:47:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:07.486 04:47:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.486 04:47:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:07.486 04:47:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:07.486 04:47:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.486 04:47:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.486 04:47:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.486 04:47:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.486 04:47:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.486 04:47:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.486 04:47:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.486 04:47:14 -- accel/accel.sh@42 -- # jq -r . 00:07:07.486 [2024-05-12 04:47:14.426300] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:07.486 [2024-05-12 04:47:14.427168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59374 ] 00:07:07.486 [2024-05-12 04:47:14.594171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.745 [2024-05-12 04:47:14.752848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.004 [2024-05-12 04:47:14.901941] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.263 [2024-05-12 04:47:15.295148] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:08.522 00:07:08.522 Compression does not support the verify option, aborting. 00:07:08.522 04:47:15 -- common/autotest_common.sh@643 -- # es=161 00:07:08.522 04:47:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:08.522 04:47:15 -- common/autotest_common.sh@652 -- # es=33 00:07:08.522 04:47:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:08.522 04:47:15 -- common/autotest_common.sh@660 -- # es=1 00:07:08.522 04:47:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:08.522 00:07:08.522 real 0m1.214s 00:07:08.522 user 0m1.026s 00:07:08.522 sys 0m0.131s 00:07:08.522 04:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.522 ************************************ 00:07:08.522 END TEST accel_compress_verify 00:07:08.522 04:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.522 ************************************ 00:07:08.522 04:47:15 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:08.522 04:47:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:08.522 04:47:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.522 04:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.522 ************************************ 00:07:08.522 START TEST accel_wrong_workload 00:07:08.522 ************************************ 00:07:08.522 04:47:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:08.522 04:47:15 -- common/autotest_common.sh@640 -- # local es=0 00:07:08.522 04:47:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:08.522 04:47:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:08.522 04:47:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.522 04:47:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:08.522 04:47:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.522 04:47:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:08.522 04:47:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:08.522 04:47:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.522 04:47:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.522 04:47:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.522 04:47:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.522 04:47:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.522 04:47:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.522 04:47:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.522 04:47:15 -- accel/accel.sh@42 -- # jq -r . 
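Every accel_perf launch in this suite repeats the build_accel_config trace just shown: accel_json_cfg starts empty, each hardware-offload gate evaluates [[ 0 -gt 0 ]] and is skipped in this run, and the fragments are joined with IFS=, on their way to the -c /dev/fd/62 config. A condensed sketch of the idea; the flag name and RPC method are illustrative placeholders, not taken from this log:

    build_accel_config_sketch() {
      local accel_json_cfg=()
      # a gate appends one JSON method only when its offload flag is set
      [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] \
        && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
      local IFS=,
      jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }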
00:07:08.781 Unsupported workload type: foobar 00:07:08.781 [2024-05-12 04:47:15.693601] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:08.781 accel_perf options: 00:07:08.781 [-h help message] 00:07:08.781 [-q queue depth per core] 00:07:08.781 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:08.781 [-T number of threads per core 00:07:08.781 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:08.781 [-t time in seconds] 00:07:08.781 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:08.781 [ dif_verify, , dif_generate, dif_generate_copy 00:07:08.781 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:08.781 [-l for compress/decompress workloads, name of uncompressed input file 00:07:08.781 [-S for crc32c workload, use this seed value (default 0) 00:07:08.781 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:08.781 [-f for fill workload, use this BYTE value (default 255) 00:07:08.781 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:08.781 [-y verify result if this switch is on] 00:07:08.781 [-a tasks to allocate per core (default: same value as -q)] 00:07:08.781 Can be used to spread operations across a wider range of memory. 00:07:08.781 04:47:15 -- common/autotest_common.sh@643 -- # es=1 00:07:08.781 04:47:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:08.781 04:47:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:08.781 04:47:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:08.781 00:07:08.781 real 0m0.079s 00:07:08.781 user 0m0.077s 00:07:08.781 sys 0m0.044s 00:07:08.781 04:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.781 ************************************ 00:07:08.781 04:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 END TEST accel_wrong_workload 00:07:08.781 ************************************ 00:07:08.781 04:47:15 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:08.781 04:47:15 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:08.781 04:47:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.781 04:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 ************************************ 00:07:08.781 START TEST accel_negative_buffers 00:07:08.781 ************************************ 00:07:08.781 04:47:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:08.781 04:47:15 -- common/autotest_common.sh@640 -- # local es=0 00:07:08.782 04:47:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:08.782 04:47:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:08.782 04:47:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.782 04:47:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:08.782 04:47:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.782 04:47:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:08.782 04:47:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:08.782 04:47:15 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:08.782 04:47:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.782 04:47:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.782 04:47:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.782 04:47:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.782 04:47:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.782 04:47:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.782 04:47:15 -- accel/accel.sh@42 -- # jq -r . 00:07:08.782 -x option must be non-negative. 00:07:08.782 [2024-05-12 04:47:15.818302] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:08.782 accel_perf options: 00:07:08.782 [-h help message] 00:07:08.782 [-q queue depth per core] 00:07:08.782 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:08.782 [-T number of threads per core 00:07:08.782 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:08.782 [-t time in seconds] 00:07:08.782 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:08.782 [ dif_verify, , dif_generate, dif_generate_copy 00:07:08.782 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:08.782 [-l for compress/decompress workloads, name of uncompressed input file 00:07:08.782 [-S for crc32c workload, use this seed value (default 0) 00:07:08.782 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:08.782 [-f for fill workload, use this BYTE value (default 255) 00:07:08.782 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:08.782 [-y verify result if this switch is on] 00:07:08.782 [-a tasks to allocate per core (default: same value as -q)] 00:07:08.782 Can be used to spread operations across a wider range of memory. 
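The usage text above is printed verbatim by both negative tests, and read against it each rejected invocation is off by a single flag. A valid counterpart to the '-w xor -y -x -1' attempt, using the binary path and flags exactly as they appear in this log (illustrative, not a run from this job):

    # xor requires at least two source buffers per the -x description above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w xor -y -x 2 -q 64 -o 4096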
00:07:08.782 04:47:15 -- common/autotest_common.sh@643 -- # es=1 00:07:08.782 04:47:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:08.782 ************************************ 00:07:08.782 END TEST accel_negative_buffers 00:07:08.782 ************************************ 00:07:08.782 04:47:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:08.782 04:47:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:08.782 00:07:08.782 real 0m0.081s 00:07:08.782 user 0m0.086s 00:07:08.782 sys 0m0.038s 00:07:08.782 04:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.782 04:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.782 04:47:15 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:08.782 04:47:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:08.782 04:47:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.782 04:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.782 ************************************ 00:07:08.782 START TEST accel_crc32c 00:07:08.782 ************************************ 00:07:08.782 04:47:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:08.782 04:47:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.782 04:47:15 -- accel/accel.sh@17 -- # local accel_module 00:07:08.782 04:47:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:08.782 04:47:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:08.782 04:47:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.782 04:47:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.782 04:47:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.782 04:47:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.782 04:47:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.782 04:47:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.782 04:47:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.782 04:47:15 -- accel/accel.sh@42 -- # jq -r . 00:07:09.042 [2024-05-12 04:47:15.945696] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:09.042 [2024-05-12 04:47:15.945843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59450 ] 00:07:09.042 [2024-05-12 04:47:16.113205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.302 [2024-05-12 04:47:16.264228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.209 04:47:18 -- accel/accel.sh@18 -- # out=' 00:07:11.209 SPDK Configuration: 00:07:11.209 Core mask: 0x1 00:07:11.209 00:07:11.209 Accel Perf Configuration: 00:07:11.209 Workload Type: crc32c 00:07:11.209 CRC-32C seed: 32 00:07:11.209 Transfer size: 4096 bytes 00:07:11.209 Vector count 1 00:07:11.209 Module: software 00:07:11.209 Queue depth: 32 00:07:11.209 Allocate depth: 32 00:07:11.209 # threads/core: 1 00:07:11.209 Run time: 1 seconds 00:07:11.209 Verify: Yes 00:07:11.209 00:07:11.209 Running for 1 seconds... 
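The out=' block below and the long val= cascades after it are two halves of one mechanism: accel_test captures an accel_perf run's 'SPDK Configuration:' text, and the IFS=: / read -r var val lines show it being replayed row by row, each 'Key: value' line split at the colon to configure the environment-driven second run. A condensed sketch of that loop, keeping only two of the keys it appears to consume:

    while IFS=: read -r var val; do
      case "$var" in
        *"Workload Type"*) accel_opc=${val# } ;;      # e.g. ' crc32c'
        *Module*)          accel_module=${val# } ;;   # e.g. ' software'
        *) ;;                                         # other rows ignored in this sketch
      esac
    done <<< "$out"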
00:07:11.209 00:07:11.209 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.209 ------------------------------------------------------------------------------------ 00:07:11.209 0,0 466048/s 1820 MiB/s 0 0 00:07:11.209 ==================================================================================== 00:07:11.209 Total 466048/s 1820 MiB/s 0 0' 00:07:11.209 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.209 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.209 04:47:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:11.209 04:47:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:11.209 04:47:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.209 04:47:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.209 04:47:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.209 04:47:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.209 04:47:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.209 04:47:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.209 04:47:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.209 04:47:18 -- accel/accel.sh@42 -- # jq -r . 00:07:11.209 [2024-05-12 04:47:18.191205] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:11.210 [2024-05-12 04:47:18.191427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59476 ] 00:07:11.469 [2024-05-12 04:47:18.358378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.469 [2024-05-12 04:47:18.505777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.728 04:47:18 -- accel/accel.sh@21 -- # val= 00:07:11.728 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.728 04:47:18 -- accel/accel.sh@21 -- # val= 00:07:11.728 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.728 04:47:18 -- accel/accel.sh@21 -- # val=0x1 00:07:11.728 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.728 04:47:18 -- accel/accel.sh@21 -- # val= 00:07:11.728 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.728 04:47:18 -- accel/accel.sh@21 -- # val= 00:07:11.728 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.728 04:47:18 -- accel/accel.sh@21 -- # val=crc32c 00:07:11.728 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.728 04:47:18 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.728 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.728 04:47:18 -- accel/accel.sh@21 -- # val=32 00:07:11.728 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val= 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val=software 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val=32 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val=32 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val=1 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val=Yes 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val= 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:11.729 04:47:18 -- accel/accel.sh@21 -- # val= 00:07:11.729 04:47:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # IFS=: 00:07:11.729 04:47:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.633 04:47:20 -- accel/accel.sh@21 -- # val= 00:07:13.633 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.633 04:47:20 -- accel/accel.sh@21 -- # val= 00:07:13.633 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.633 04:47:20 -- accel/accel.sh@21 -- # val= 00:07:13.633 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.633 04:47:20 -- accel/accel.sh@21 -- # val= 00:07:13.633 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.633 04:47:20 -- accel/accel.sh@21 -- # val= 00:07:13.633 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.633 04:47:20 -- 
accel/accel.sh@20 -- # read -r var val 00:07:13.633 04:47:20 -- accel/accel.sh@21 -- # val= 00:07:13.633 04:47:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.633 04:47:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.633 04:47:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.633 04:47:20 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:13.633 ************************************ 00:07:13.633 END TEST accel_crc32c 00:07:13.633 ************************************ 00:07:13.633 04:47:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.633 00:07:13.633 real 0m4.454s 00:07:13.633 user 0m3.955s 00:07:13.633 sys 0m0.293s 00:07:13.633 04:47:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.633 04:47:20 -- common/autotest_common.sh@10 -- # set +x 00:07:13.633 04:47:20 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:13.633 04:47:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:13.633 04:47:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.633 04:47:20 -- common/autotest_common.sh@10 -- # set +x 00:07:13.633 ************************************ 00:07:13.633 START TEST accel_crc32c_C2 00:07:13.633 ************************************ 00:07:13.633 04:47:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:13.633 04:47:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.633 04:47:20 -- accel/accel.sh@17 -- # local accel_module 00:07:13.633 04:47:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:13.633 04:47:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:13.633 04:47:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.633 04:47:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.633 04:47:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.633 04:47:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.633 04:47:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.633 04:47:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.633 04:47:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.633 04:47:20 -- accel/accel.sh@42 -- # jq -r . 00:07:13.633 [2024-05-12 04:47:20.455849] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:13.633 [2024-05-12 04:47:20.456061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59522 ] 00:07:13.633 [2024-05-12 04:47:20.626905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.905 [2024-05-12 04:47:20.780737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.832 04:47:22 -- accel/accel.sh@18 -- # out=' 00:07:15.832 SPDK Configuration: 00:07:15.832 Core mask: 0x1 00:07:15.832 00:07:15.832 Accel Perf Configuration: 00:07:15.832 Workload Type: crc32c 00:07:15.832 CRC-32C seed: 0 00:07:15.832 Transfer size: 4096 bytes 00:07:15.832 Vector count 2 00:07:15.832 Module: software 00:07:15.832 Queue depth: 32 00:07:15.832 Allocate depth: 32 00:07:15.832 # threads/core: 1 00:07:15.832 Run time: 1 seconds 00:07:15.832 Verify: Yes 00:07:15.832 00:07:15.832 Running for 1 seconds... 
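The bandwidth column in these summaries is plain arithmetic: operations per second times the 4096-byte transfer size. Checking the single-vector crc32c run that just finished, with its numbers copied in:

    awk 'BEGIN {
      ops = 466048; xfer = 4096                         # from the crc32c summary above
      printf "%d MiB/s\n", int(ops * xfer / 1048576)    # -> 1820, matching the report
    }'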
00:07:15.832 00:07:15.832 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.832 ------------------------------------------------------------------------------------ 00:07:15.832 0,0 361728/s 2826 MiB/s 0 0 00:07:15.832 ==================================================================================== 00:07:15.832 Total 361728/s 1413 MiB/s 0 0' 00:07:15.832 04:47:22 -- accel/accel.sh@20 -- # IFS=: 00:07:15.832 04:47:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:15.832 04:47:22 -- accel/accel.sh@20 -- # read -r var val 00:07:15.832 04:47:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:15.832 04:47:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.832 04:47:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.832 04:47:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.832 04:47:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.832 04:47:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.832 04:47:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.832 04:47:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.832 04:47:22 -- accel/accel.sh@42 -- # jq -r . 00:07:15.832 [2024-05-12 04:47:22.679105] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:15.832 [2024-05-12 04:47:22.679796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59548 ] 00:07:15.832 [2024-05-12 04:47:22.849169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.091 [2024-05-12 04:47:22.998176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val= 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val= 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=0x1 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val= 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val= 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=crc32c 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=0 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val= 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=software 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=32 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=32 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=1 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val=Yes 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val= 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.092 04:47:23 -- accel/accel.sh@21 -- # val= 00:07:16.092 04:47:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.092 04:47:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.998 04:47:24 -- accel/accel.sh@21 -- # val= 00:07:17.998 04:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.998 04:47:24 -- accel/accel.sh@21 -- # val= 00:07:17.998 04:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.998 04:47:24 -- accel/accel.sh@21 -- # val= 00:07:17.998 04:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.998 04:47:24 -- accel/accel.sh@21 -- # val= 00:07:17.998 04:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.998 04:47:24 -- accel/accel.sh@21 -- # val= 00:07:17.998 04:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.998 04:47:24 -- 
accel/accel.sh@20 -- # read -r var val 00:07:17.998 04:47:24 -- accel/accel.sh@21 -- # val= 00:07:17.998 04:47:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.998 04:47:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.998 04:47:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.998 04:47:24 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:17.998 04:47:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.998 00:07:17.998 real 0m4.484s 00:07:17.998 user 0m3.977s 00:07:17.998 sys 0m0.298s 00:07:17.998 04:47:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.998 04:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.998 ************************************ 00:07:17.998 END TEST accel_crc32c_C2 00:07:17.998 ************************************ 00:07:17.998 04:47:24 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:17.998 04:47:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:17.998 04:47:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.998 04:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.998 ************************************ 00:07:17.998 START TEST accel_copy 00:07:17.998 ************************************ 00:07:17.998 04:47:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:17.998 04:47:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.998 04:47:24 -- accel/accel.sh@17 -- # local accel_module 00:07:17.998 04:47:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:17.998 04:47:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:17.998 04:47:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.998 04:47:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.998 04:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.998 04:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.998 04:47:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.999 04:47:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.999 04:47:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.999 04:47:24 -- accel/accel.sh@42 -- # jq -r . 00:07:17.999 [2024-05-12 04:47:25.001126] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:17.999 [2024-05-12 04:47:25.001332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59595 ] 00:07:18.257 [2024-05-12 04:47:25.180188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.516 [2024-05-12 04:47:25.398266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.419 04:47:27 -- accel/accel.sh@18 -- # out=' 00:07:20.419 SPDK Configuration: 00:07:20.419 Core mask: 0x1 00:07:20.419 00:07:20.419 Accel Perf Configuration: 00:07:20.419 Workload Type: copy 00:07:20.419 Transfer size: 4096 bytes 00:07:20.419 Vector count 1 00:07:20.419 Module: software 00:07:20.419 Queue depth: 32 00:07:20.419 Allocate depth: 32 00:07:20.419 # threads/core: 1 00:07:20.419 Run time: 1 seconds 00:07:20.419 Verify: Yes 00:07:20.419 00:07:20.419 Running for 1 seconds... 
00:07:20.419 00:07:20.419 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.419 ------------------------------------------------------------------------------------ 00:07:20.419 0,0 239840/s 936 MiB/s 0 0 00:07:20.419 ==================================================================================== 00:07:20.419 Total 239840/s 936 MiB/s 0 0' 00:07:20.419 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.419 04:47:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:20.419 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.419 04:47:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:20.419 04:47:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.419 04:47:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.419 04:47:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.419 04:47:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.419 04:47:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.419 04:47:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.419 04:47:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.419 04:47:27 -- accel/accel.sh@42 -- # jq -r . 00:07:20.419 [2024-05-12 04:47:27.385283] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:20.419 [2024-05-12 04:47:27.385453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59621 ] 00:07:20.678 [2024-05-12 04:47:27.554326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.678 [2024-05-12 04:47:27.713053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val= 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val= 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val=0x1 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val= 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val= 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val=copy 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- 
accel/accel.sh@21 -- # val= 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val=software 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val=32 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val=32 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val=1 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val=Yes 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val= 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.937 04:47:27 -- accel/accel.sh@21 -- # val= 00:07:20.937 04:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.937 04:47:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.843 04:47:29 -- accel/accel.sh@21 -- # val= 00:07:22.843 04:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.843 04:47:29 -- accel/accel.sh@21 -- # val= 00:07:22.843 04:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.843 04:47:29 -- accel/accel.sh@21 -- # val= 00:07:22.843 04:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.843 04:47:29 -- accel/accel.sh@21 -- # val= 00:07:22.843 04:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.843 04:47:29 -- accel/accel.sh@21 -- # val= 00:07:22.843 04:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.843 04:47:29 -- accel/accel.sh@21 -- # val= 00:07:22.843 04:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.843 04:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.843 04:47:29 -- 
accel/accel.sh@20 -- # read -r var val 00:07:22.843 04:47:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.843 04:47:29 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:22.843 04:47:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.843 00:07:22.843 real 0m4.622s 00:07:22.843 user 0m4.109s 00:07:22.843 sys 0m0.303s 00:07:22.843 04:47:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.843 ************************************ 00:07:22.843 END TEST accel_copy 00:07:22.843 ************************************ 00:07:22.843 04:47:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.843 04:47:29 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:22.843 04:47:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:22.843 04:47:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.843 04:47:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.843 ************************************ 00:07:22.843 START TEST accel_fill 00:07:22.843 ************************************ 00:07:22.843 04:47:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:22.843 04:47:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.843 04:47:29 -- accel/accel.sh@17 -- # local accel_module 00:07:22.843 04:47:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:22.843 04:47:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:22.843 04:47:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.843 04:47:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.843 04:47:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.843 04:47:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.843 04:47:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.843 04:47:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.843 04:47:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.843 04:47:29 -- accel/accel.sh@42 -- # jq -r . 00:07:22.843 [2024-05-12 04:47:29.670588] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:22.843 [2024-05-12 04:47:29.670721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59662 ] 00:07:22.843 [2024-05-12 04:47:29.827314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.103 [2024-05-12 04:47:29.994362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.009 04:47:31 -- accel/accel.sh@18 -- # out=' 00:07:25.009 SPDK Configuration: 00:07:25.009 Core mask: 0x1 00:07:25.009 00:07:25.009 Accel Perf Configuration: 00:07:25.009 Workload Type: fill 00:07:25.009 Fill pattern: 0x80 00:07:25.009 Transfer size: 4096 bytes 00:07:25.009 Vector count 1 00:07:25.009 Module: software 00:07:25.009 Queue depth: 64 00:07:25.010 Allocate depth: 64 00:07:25.010 # threads/core: 1 00:07:25.010 Run time: 1 seconds 00:07:25.010 Verify: Yes 00:07:25.010 00:07:25.010 Running for 1 seconds... 
00:07:25.010 00:07:25.010 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.010 ------------------------------------------------------------------------------------ 00:07:25.010 0,0 427072/s 1668 MiB/s 0 0 00:07:25.010 ==================================================================================== 00:07:25.010 Total 427072/s 1668 MiB/s 0 0' 00:07:25.010 04:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:25.010 04:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:25.010 04:47:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.010 04:47:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.010 04:47:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.010 04:47:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.010 04:47:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.010 04:47:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.010 04:47:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.010 04:47:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.010 04:47:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.010 04:47:31 -- accel/accel.sh@42 -- # jq -r . 00:07:25.010 [2024-05-12 04:47:31.885722] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:25.010 [2024-05-12 04:47:31.885887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59688 ] 00:07:25.010 [2024-05-12 04:47:32.057480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.269 [2024-05-12 04:47:32.247270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val= 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val= 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=0x1 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val= 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val= 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=fill 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=0x80 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 
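Two quick checks on the fill run above: the decimal value passed as "-f 128" is the "0x80" fill pattern echoed in the configuration block and in the val=0x80 assignment just traced, and the throughput row follows the same transfers-per-second times transfer-size arithmetic as every other table in this run. A minimal shell verification, with both numbers taken from the fill table above:

printf '0x%x\n' 128                       # -> 0x80, the fill pattern reported above
echo $(( 427072 * 4096 / 1024 / 1024 ))   # -> 1668, the MiB/s in the fill row
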
00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val= 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=software 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=64 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=64 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=1 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val=Yes 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val= 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.528 04:47:32 -- accel/accel.sh@21 -- # val= 00:07:25.528 04:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.528 04:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.440 04:47:34 -- accel/accel.sh@21 -- # val= 00:07:27.440 04:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:27.440 04:47:34 -- accel/accel.sh@21 -- # val= 00:07:27.440 04:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:27.440 04:47:34 -- accel/accel.sh@21 -- # val= 00:07:27.440 04:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:27.440 04:47:34 -- accel/accel.sh@21 -- # val= 00:07:27.440 04:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:27.440 04:47:34 -- accel/accel.sh@21 -- # val= 00:07:27.440 04:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # IFS=: 
00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:27.440 04:47:34 -- accel/accel.sh@21 -- # val= 00:07:27.440 04:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:27.440 04:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:27.440 04:47:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.440 ************************************ 00:07:27.440 END TEST accel_fill 00:07:27.440 ************************************ 00:07:27.440 04:47:34 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:27.440 04:47:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.440 00:07:27.440 real 0m4.465s 00:07:27.440 user 0m3.969s 00:07:27.440 sys 0m0.290s 00:07:27.440 04:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.440 04:47:34 -- common/autotest_common.sh@10 -- # set +x 00:07:27.440 04:47:34 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:27.440 04:47:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:27.440 04:47:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.440 04:47:34 -- common/autotest_common.sh@10 -- # set +x 00:07:27.440 ************************************ 00:07:27.440 START TEST accel_copy_crc32c 00:07:27.440 ************************************ 00:07:27.440 04:47:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:27.440 04:47:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.440 04:47:34 -- accel/accel.sh@17 -- # local accel_module 00:07:27.440 04:47:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:27.440 04:47:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:27.440 04:47:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.440 04:47:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.440 04:47:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.440 04:47:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.440 04:47:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.440 04:47:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.440 04:47:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.440 04:47:34 -- accel/accel.sh@42 -- # jq -r . 00:07:27.440 [2024-05-12 04:47:34.201469] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:27.440 [2024-05-12 04:47:34.201662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:07:27.440 [2024-05-12 04:47:34.370546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.440 [2024-05-12 04:47:34.524489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.344 04:47:36 -- accel/accel.sh@18 -- # out=' 00:07:29.344 SPDK Configuration: 00:07:29.344 Core mask: 0x1 00:07:29.344 00:07:29.344 Accel Perf Configuration: 00:07:29.344 Workload Type: copy_crc32c 00:07:29.344 CRC-32C seed: 0 00:07:29.344 Vector size: 4096 bytes 00:07:29.344 Transfer size: 4096 bytes 00:07:29.344 Vector count 1 00:07:29.344 Module: software 00:07:29.344 Queue depth: 32 00:07:29.344 Allocate depth: 32 00:07:29.344 # threads/core: 1 00:07:29.344 Run time: 1 seconds 00:07:29.344 Verify: Yes 00:07:29.344 00:07:29.344 Running for 1 seconds... 
00:07:29.344 00:07:29.344 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.344 ------------------------------------------------------------------------------------ 00:07:29.344 0,0 233664/s 912 MiB/s 0 0 00:07:29.344 ==================================================================================== 00:07:29.344 Total 233664/s 912 MiB/s 0 0' 00:07:29.344 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.344 04:47:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:29.344 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.344 04:47:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:29.344 04:47:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.344 04:47:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.344 04:47:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.344 04:47:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.344 04:47:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.344 04:47:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.344 04:47:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.344 04:47:36 -- accel/accel.sh@42 -- # jq -r . 00:07:29.344 [2024-05-12 04:47:36.456003] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:29.344 [2024-05-12 04:47:36.456157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:07:29.603 [2024-05-12 04:47:36.622751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.862 [2024-05-12 04:47:36.777380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val= 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val= 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=0x1 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val= 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val= 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=0 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 
04:47:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val= 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=software 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=32 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=32 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=1 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val=Yes 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val= 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.862 04:47:36 -- accel/accel.sh@21 -- # val= 00:07:29.862 04:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.862 04:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:31.766 04:47:38 -- accel/accel.sh@21 -- # val= 00:07:31.766 04:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.766 04:47:38 -- accel/accel.sh@21 -- # val= 00:07:31.766 04:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.766 04:47:38 -- accel/accel.sh@21 -- # val= 00:07:31.766 04:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.766 04:47:38 -- accel/accel.sh@21 -- # val= 00:07:31.766 04:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # IFS=: 
00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.766 04:47:38 -- accel/accel.sh@21 -- # val= 00:07:31.766 04:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.766 04:47:38 -- accel/accel.sh@21 -- # val= 00:07:31.766 04:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:31.766 04:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:31.766 04:47:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.766 04:47:38 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:31.766 04:47:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.766 00:07:31.766 real 0m4.477s 00:07:31.766 user 0m4.000s 00:07:31.766 sys 0m0.269s 00:07:31.766 04:47:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.766 ************************************ 00:07:31.766 END TEST accel_copy_crc32c 00:07:31.766 ************************************ 00:07:31.766 04:47:38 -- common/autotest_common.sh@10 -- # set +x 00:07:31.766 04:47:38 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:31.766 04:47:38 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:31.766 04:47:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.766 04:47:38 -- common/autotest_common.sh@10 -- # set +x 00:07:31.767 ************************************ 00:07:31.767 START TEST accel_copy_crc32c_C2 00:07:31.767 ************************************ 00:07:31.767 04:47:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:31.767 04:47:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.767 04:47:38 -- accel/accel.sh@17 -- # local accel_module 00:07:31.767 04:47:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:31.767 04:47:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:31.767 04:47:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.767 04:47:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.767 04:47:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.767 04:47:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.767 04:47:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.767 04:47:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.767 04:47:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.767 04:47:38 -- accel/accel.sh@42 -- # jq -r . 00:07:31.767 [2024-05-12 04:47:38.730044] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
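The build_accel_config echoes traced above (accel.sh@32 through @42) expose only the control flow: an empty accel_json_cfg array, three numeric feature gates that all evaluate as "[[ 0 -gt 0 ]]" in this run, one "[[ -n '' ]]" string check, and a comma-join through "local IFS=," before piping into "jq -r .". A hedged reconstruction under those observations; the environment-variable names and the JSON method string are illustrative guesses, not taken from this log:

build_accel_config() {
  accel_json_cfg=()
  # one of the three gates echoed as "[[ 0 -gt 0 ]]"; the variable name is a guess
  [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
  # the string check echoed as "[[ -n '' ]]"; again a hypothetical name
  [[ -n ${ACCEL_EXTRA_JSON:-} ]] && accel_json_cfg+=("$ACCEL_EXTRA_JSON")
  local IFS=,
  jq -r . <<JSON
{"subsystems": [{"subsystem": "accel", "config": [${accel_json_cfg[*]}]}]}
JSON
}
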
00:07:31.767 [2024-05-12 04:47:38.730263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59807 ] 00:07:32.026 [2024-05-12 04:47:38.899072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.026 [2024-05-12 04:47:39.049842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.930 04:47:40 -- accel/accel.sh@18 -- # out=' 00:07:33.930 SPDK Configuration: 00:07:33.930 Core mask: 0x1 00:07:33.930 00:07:33.930 Accel Perf Configuration: 00:07:33.930 Workload Type: copy_crc32c 00:07:33.930 CRC-32C seed: 0 00:07:33.930 Vector size: 4096 bytes 00:07:33.930 Transfer size: 8192 bytes 00:07:33.930 Vector count 2 00:07:33.930 Module: software 00:07:33.930 Queue depth: 32 00:07:33.930 Allocate depth: 32 00:07:33.930 # threads/core: 1 00:07:33.930 Run time: 1 seconds 00:07:33.930 Verify: Yes 00:07:33.930 00:07:33.930 Running for 1 seconds... 00:07:33.930 00:07:33.930 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.930 ------------------------------------------------------------------------------------ 00:07:33.930 0,0 168064/s 1313 MiB/s 0 0 00:07:33.930 ==================================================================================== 00:07:33.930 Total 168064/s 656 MiB/s 0 0' 00:07:33.930 04:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.930 04:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.930 04:47:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:33.930 04:47:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:33.930 04:47:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.930 04:47:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.930 04:47:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.930 04:47:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.930 04:47:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.930 04:47:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.930 04:47:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.930 04:47:40 -- accel/accel.sh@42 -- # jq -r . 00:07:33.930 [2024-05-12 04:47:40.946740] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
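A note on the copy_crc32c_C2 numbers above: with "-C 2" each operation chains two 4096-byte vectors, so one transfer moves 8192 bytes. The per-core row is consistent with that size, while the Total row as printed corresponds to the single-vector size, which looks like a reporting quirk of this accel_perf build rather than an actual throughput drop:

echo $(( 168064 * 8192 / 1024 / 1024 ))   # -> 1313, the per-core MiB/s row above
echo $(( 168064 * 4096 / 1024 / 1024 ))   # -> 656, the Total row as printed
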
00:07:33.930 [2024-05-12 04:47:40.946898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:07:34.189 [2024-05-12 04:47:41.114185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.189 [2024-05-12 04:47:41.269531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val= 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val= 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=0x1 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val= 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val= 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=0 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val= 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=software 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=32 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=32 
00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=1 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val=Yes 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val= 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.449 04:47:41 -- accel/accel.sh@21 -- # val= 00:07:34.449 04:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.449 04:47:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.360 04:47:43 -- accel/accel.sh@21 -- # val= 00:07:36.360 04:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:36.360 04:47:43 -- accel/accel.sh@21 -- # val= 00:07:36.360 04:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:36.360 04:47:43 -- accel/accel.sh@21 -- # val= 00:07:36.360 04:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:36.360 04:47:43 -- accel/accel.sh@21 -- # val= 00:07:36.360 04:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:36.360 04:47:43 -- accel/accel.sh@21 -- # val= 00:07:36.360 04:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:36.360 04:47:43 -- accel/accel.sh@21 -- # val= 00:07:36.360 04:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:36.360 04:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:36.360 04:47:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.360 04:47:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:36.360 04:47:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.360 00:07:36.360 real 0m4.438s 00:07:36.360 user 0m3.943s 00:07:36.360 sys 0m0.289s 00:07:36.360 04:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.360 04:47:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.360 ************************************ 00:07:36.360 END TEST accel_copy_crc32c_C2 00:07:36.360 ************************************ 00:07:36.360 04:47:43 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:36.360 04:47:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
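The run_test call just traced, together with the "'[' 7 -le 1 ']'" argument-count check from autotest_common.sh@1077, the START/END banners, and the real/user/sys triplets throughout this section, suggests a wrapper of roughly the following shape. This is a hedged sketch; everything past those echoed details is assumed:

run_test() {
  [ $# -le 1 ] && return 1      # the argument-count check echoed above as '[' 7 -le 1 ']'
  local test_name=$1
  shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                     # emits the real/user/sys lines seen after each test
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}
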
00:07:36.360 04:47:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.360 04:47:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.360 ************************************ 00:07:36.360 START TEST accel_dualcast 00:07:36.360 ************************************ 00:07:36.360 04:47:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:36.360 04:47:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.360 04:47:43 -- accel/accel.sh@17 -- # local accel_module 00:07:36.360 04:47:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:36.360 04:47:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:36.360 04:47:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.360 04:47:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.360 04:47:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.360 04:47:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.360 04:47:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.360 04:47:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.360 04:47:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.360 04:47:43 -- accel/accel.sh@42 -- # jq -r . 00:07:36.360 [2024-05-12 04:47:43.227873] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:36.360 [2024-05-12 04:47:43.228019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59874 ] 00:07:36.360 [2024-05-12 04:47:43.383302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.618 [2024-05-12 04:47:43.584549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.523 04:47:45 -- accel/accel.sh@18 -- # out=' 00:07:38.523 SPDK Configuration: 00:07:38.523 Core mask: 0x1 00:07:38.523 00:07:38.523 Accel Perf Configuration: 00:07:38.523 Workload Type: dualcast 00:07:38.523 Transfer size: 4096 bytes 00:07:38.523 Vector count 1 00:07:38.523 Module: software 00:07:38.523 Queue depth: 32 00:07:38.523 Allocate depth: 32 00:07:38.523 # threads/core: 1 00:07:38.523 Run time: 1 seconds 00:07:38.523 Verify: Yes 00:07:38.523 00:07:38.523 Running for 1 seconds... 00:07:38.523 00:07:38.523 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.523 ------------------------------------------------------------------------------------ 00:07:38.523 0,0 304288/s 1188 MiB/s 0 0 00:07:38.523 ==================================================================================== 00:07:38.523 Total 304288/s 1188 MiB/s 0 0' 00:07:38.523 04:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:38.523 04:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:38.523 04:47:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:38.523 04:47:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:38.523 04:47:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.523 04:47:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.523 04:47:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.523 04:47:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.523 04:47:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.523 04:47:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.523 04:47:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.523 04:47:45 -- accel/accel.sh@42 -- # jq -r . 
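The accel.sh@12 line traced above gives the one hard fact about the harness: the accel_perf example binary is always started with "-c /dev/fd/62" plus the per-test flags. A hedged guess at the wrapper behind it; feeding descriptor 62 from build_accel_config via process substitution is an assumption about the plumbing, only the binary path and arguments are from the log:

accel_perf() {
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c /dev/fd/62 "$@" 62< <(build_accel_config)
}
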
00:07:38.523 [2024-05-12 04:47:45.525680] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:38.523 [2024-05-12 04:47:45.525840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:07:38.782 [2024-05-12 04:47:45.690590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.782 [2024-05-12 04:47:45.865409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val= 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val= 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val=0x1 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val= 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val= 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val=dualcast 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val= 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val=software 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val=32 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val=32 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val=1 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 
04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val=Yes 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val= 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:39.041 04:47:46 -- accel/accel.sh@21 -- # val= 00:07:39.041 04:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:39.041 04:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.966 04:47:47 -- accel/accel.sh@21 -- # val= 00:07:40.966 04:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.966 04:47:47 -- accel/accel.sh@21 -- # val= 00:07:40.966 04:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.966 04:47:47 -- accel/accel.sh@21 -- # val= 00:07:40.966 04:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.966 04:47:47 -- accel/accel.sh@21 -- # val= 00:07:40.966 04:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.966 04:47:47 -- accel/accel.sh@21 -- # val= 00:07:40.966 04:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.966 04:47:47 -- accel/accel.sh@21 -- # val= 00:07:40.966 04:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:40.966 04:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:40.966 04:47:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.966 04:47:47 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:40.966 04:47:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.966 00:07:40.966 real 0m4.580s 00:07:40.966 user 0m4.086s 00:07:40.966 sys 0m0.278s 00:07:40.966 04:47:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.966 04:47:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.966 ************************************ 00:07:40.966 END TEST accel_dualcast 00:07:40.966 ************************************ 00:07:40.966 04:47:47 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:40.966 04:47:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:40.966 04:47:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.966 04:47:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.966 ************************************ 00:07:40.966 START TEST accel_compare 00:07:40.966 ************************************ 00:07:40.966 04:47:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:40.966 
04:47:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.966 04:47:47 -- accel/accel.sh@17 -- # local accel_module 00:07:40.966 04:47:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:40.966 04:47:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:40.966 04:47:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.966 04:47:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.966 04:47:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.966 04:47:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.966 04:47:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.966 04:47:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.966 04:47:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.966 04:47:47 -- accel/accel.sh@42 -- # jq -r . 00:07:40.966 [2024-05-12 04:47:47.866192] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:40.966 [2024-05-12 04:47:47.866356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59947 ] 00:07:40.966 [2024-05-12 04:47:48.033513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.226 [2024-05-12 04:47:48.188734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.132 04:47:50 -- accel/accel.sh@18 -- # out=' 00:07:43.132 SPDK Configuration: 00:07:43.132 Core mask: 0x1 00:07:43.132 00:07:43.132 Accel Perf Configuration: 00:07:43.132 Workload Type: compare 00:07:43.132 Transfer size: 4096 bytes 00:07:43.132 Vector count 1 00:07:43.132 Module: software 00:07:43.132 Queue depth: 32 00:07:43.132 Allocate depth: 32 00:07:43.132 # threads/core: 1 00:07:43.132 Run time: 1 seconds 00:07:43.132 Verify: Yes 00:07:43.132 00:07:43.132 Running for 1 seconds... 00:07:43.132 00:07:43.132 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.132 ------------------------------------------------------------------------------------ 00:07:43.132 0,0 377632/s 1475 MiB/s 0 0 00:07:43.132 ==================================================================================== 00:07:43.132 Total 377632/s 1475 MiB/s 0 0' 00:07:43.132 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.132 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.132 04:47:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:43.132 04:47:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:43.132 04:47:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.132 04:47:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.132 04:47:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.132 04:47:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.132 04:47:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.132 04:47:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.132 04:47:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.132 04:47:50 -- accel/accel.sh@42 -- # jq -r . 00:07:43.132 [2024-05-12 04:47:50.141385] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
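The IFS=: / read -r var val / case "$var" pairs that dominate this section come from a loop that walks the captured accel_perf report (the out=' capture at accel.sh@18) and pulls out the fields asserted at accel.sh@28. A minimal sketch of that loop; the match patterns are guesses, while the variable names follow the trace:

while IFS=: read -r var val; do
  case "$var" in
    *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;    # -> copy, fill, compare, ...
    *Module*)          accel_module=${val//[[:space:]]/} ;; # -> software
  esac
done <<< "$out"
[[ -n $accel_module ]] && [[ -n $accel_opc ]]   # the checks echoed at accel.sh@28
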
00:07:43.132 [2024-05-12 04:47:50.141557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59977 ] 00:07:43.390 [2024-05-12 04:47:50.310960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.390 [2024-05-12 04:47:50.483155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val= 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val= 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val=0x1 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val= 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val= 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val=compare 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val= 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val=software 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val=32 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val=32 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val=1 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val=Yes 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val= 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:43.651 04:47:50 -- accel/accel.sh@21 -- # val= 00:07:43.651 04:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:43.651 04:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:45.554 04:47:52 -- accel/accel.sh@21 -- # val= 00:07:45.554 04:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:45.554 04:47:52 -- accel/accel.sh@21 -- # val= 00:07:45.554 04:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:45.554 04:47:52 -- accel/accel.sh@21 -- # val= 00:07:45.554 04:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:45.554 04:47:52 -- accel/accel.sh@21 -- # val= 00:07:45.554 04:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:45.554 04:47:52 -- accel/accel.sh@21 -- # val= 00:07:45.554 04:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:45.554 04:47:52 -- accel/accel.sh@21 -- # val= 00:07:45.554 04:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:45.554 04:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:45.554 04:47:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.554 04:47:52 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:45.554 04:47:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.554 00:07:45.554 real 0m4.616s 00:07:45.554 user 0m4.109s 00:07:45.554 sys 0m0.295s 00:07:45.554 04:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.554 04:47:52 -- common/autotest_common.sh@10 -- # set +x 00:07:45.554 ************************************ 00:07:45.554 END TEST accel_compare 00:07:45.554 ************************************ 00:07:45.554 04:47:52 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:45.554 04:47:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:45.554 04:47:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.554 04:47:52 -- common/autotest_common.sh@10 -- # set +x 00:07:45.554 ************************************ 00:07:45.554 START TEST accel_xor 00:07:45.554 ************************************ 00:07:45.554 04:47:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:45.554 04:47:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.554 04:47:52 -- accel/accel.sh@17 -- # local accel_module 00:07:45.554 
04:47:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:45.554 04:47:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:45.554 04:47:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.554 04:47:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.554 04:47:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.554 04:47:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.554 04:47:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.554 04:47:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.554 04:47:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.554 04:47:52 -- accel/accel.sh@42 -- # jq -r . 00:07:45.554 [2024-05-12 04:47:52.530114] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:45.554 [2024-05-12 04:47:52.530281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:07:45.812 [2024-05-12 04:47:52.698023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.812 [2024-05-12 04:47:52.872464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.713 04:47:54 -- accel/accel.sh@18 -- # out=' 00:07:47.713 SPDK Configuration: 00:07:47.713 Core mask: 0x1 00:07:47.713 00:07:47.713 Accel Perf Configuration: 00:07:47.713 Workload Type: xor 00:07:47.713 Source buffers: 2 00:07:47.713 Transfer size: 4096 bytes 00:07:47.713 Vector count 1 00:07:47.714 Module: software 00:07:47.714 Queue depth: 32 00:07:47.714 Allocate depth: 32 00:07:47.714 # threads/core: 1 00:07:47.714 Run time: 1 seconds 00:07:47.714 Verify: Yes 00:07:47.714 00:07:47.714 Running for 1 seconds... 00:07:47.714 00:07:47.714 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.714 ------------------------------------------------------------------------------------ 00:07:47.714 0,0 208768/s 815 MiB/s 0 0 00:07:47.714 ==================================================================================== 00:07:47.714 Total 208768/s 815 MiB/s 0 0' 00:07:47.714 04:47:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:47.714 04:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:47.714 04:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:47.714 04:47:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:47.714 04:47:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.714 04:47:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.714 04:47:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.714 04:47:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.714 04:47:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.714 04:47:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.714 04:47:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.714 04:47:54 -- accel/accel.sh@42 -- # jq -r . 00:07:47.972 [2024-05-12 04:47:54.865185] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
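Every accel_perf invocation in this run draws on the same small flag set; collected here for reference, all taken from the command lines above and below, nothing added:

# -c FILE  JSON accel config; the harness feeds it via process substitution (/dev/fd/62)
# -t SEC   measurement time in seconds
# -w TYPE  workload: compare, xor, dif_verify, dif_generate, dif_generate_copy, compress, decompress
# -y       verify results
# -x N     number of XOR source buffers
# -l FILE  input file for compress/decompress
# Minimal manual run, assuming accel_perf falls back to built-in defaults when -c is omitted:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y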
00:07:47.972 [2024-05-12 04:47:54.865381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60051 ] 00:07:47.972 [2024-05-12 04:47:55.035627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.230 [2024-05-12 04:47:55.223432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val= 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val= 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=0x1 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val= 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val= 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=xor 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=2 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val= 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=software 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=32 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=32 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=1 00:07:48.488 04:47:55 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val=Yes 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val= 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:48.488 04:47:55 -- accel/accel.sh@21 -- # val= 00:07:48.488 04:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:48.488 04:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.393 04:47:57 -- accel/accel.sh@21 -- # val= 00:07:50.393 04:47:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.393 04:47:57 -- accel/accel.sh@21 -- # val= 00:07:50.393 04:47:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.393 04:47:57 -- accel/accel.sh@21 -- # val= 00:07:50.393 04:47:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.393 04:47:57 -- accel/accel.sh@21 -- # val= 00:07:50.393 04:47:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.393 04:47:57 -- accel/accel.sh@21 -- # val= 00:07:50.393 04:47:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.393 04:47:57 -- accel/accel.sh@21 -- # val= 00:07:50.393 04:47:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # IFS=: 00:07:50.393 04:47:57 -- accel/accel.sh@20 -- # read -r var val 00:07:50.393 04:47:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.393 04:47:57 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:50.393 04:47:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.393 00:07:50.393 real 0m4.658s 00:07:50.393 user 0m4.154s 00:07:50.393 sys 0m0.293s 00:07:50.393 04:47:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.393 04:47:57 -- common/autotest_common.sh@10 -- # set +x 00:07:50.393 ************************************ 00:07:50.393 END TEST accel_xor 00:07:50.393 ************************************ 00:07:50.393 04:47:57 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:50.393 04:47:57 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:50.393 04:47:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.393 04:47:57 -- common/autotest_common.sh@10 -- # set +x 00:07:50.393 ************************************ 00:07:50.393 START TEST accel_xor 00:07:50.393 ************************************ 00:07:50.393 
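Before the -x 3 variant starts below, a quick sanity check of the throughput reported above: the Total row is simply transfers per second times the 4096-byte transfer size.

# 208768 transfers/s x 4096 B = 815.5 MiB/s, matching "Total 208768/s 815 MiB/s"
echo $(( 208768 * 4096 / 1024 / 1024 ))   # prints 815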
04:47:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:50.393 04:47:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.393 04:47:57 -- accel/accel.sh@17 -- # local accel_module 00:07:50.393 04:47:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:50.393 04:47:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:50.393 04:47:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.393 04:47:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.393 04:47:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.393 04:47:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.393 04:47:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.393 04:47:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.393 04:47:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.393 04:47:57 -- accel/accel.sh@42 -- # jq -r . 00:07:50.393 [2024-05-12 04:47:57.244728] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:50.393 [2024-05-12 04:47:57.244873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60092 ] 00:07:50.393 [2024-05-12 04:47:57.415349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.652 [2024-05-12 04:47:57.593726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.613 04:47:59 -- accel/accel.sh@18 -- # out=' 00:07:52.613 SPDK Configuration: 00:07:52.613 Core mask: 0x1 00:07:52.613 00:07:52.613 Accel Perf Configuration: 00:07:52.613 Workload Type: xor 00:07:52.613 Source buffers: 3 00:07:52.613 Transfer size: 4096 bytes 00:07:52.613 Vector count 1 00:07:52.613 Module: software 00:07:52.613 Queue depth: 32 00:07:52.613 Allocate depth: 32 00:07:52.613 # threads/core: 1 00:07:52.613 Run time: 1 seconds 00:07:52.613 Verify: Yes 00:07:52.613 00:07:52.613 Running for 1 seconds... 00:07:52.613 00:07:52.613 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.613 ------------------------------------------------------------------------------------ 00:07:52.613 0,0 199648/s 779 MiB/s 0 0 00:07:52.613 ==================================================================================== 00:07:52.613 Total 199648/s 779 MiB/s 0 0' 00:07:52.613 04:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:52.613 04:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:52.613 04:47:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:52.613 04:47:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:52.613 04:47:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.613 04:47:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.613 04:47:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.613 04:47:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.613 04:47:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.613 04:47:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.613 04:47:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.613 04:47:59 -- accel/accel.sh@42 -- # jq -r . 00:07:52.613 [2024-05-12 04:47:59.552960] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
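The -x 3 flag is the only difference from the previous xor run: it raises the XOR source-buffer count, which is why the configuration dump above reports "Source buffers: 3" where the run without -x reported "Source buffers: 2". Equivalent manual invocation, flags exactly as logged:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3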
00:07:52.613 [2024-05-12 04:47:59.553108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60119 ] 00:07:52.613 [2024-05-12 04:47:59.723603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.873 [2024-05-12 04:47:59.904432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.132 04:48:00 -- accel/accel.sh@21 -- # val= 00:07:53.132 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.132 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.132 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.132 04:48:00 -- accel/accel.sh@21 -- # val= 00:07:53.132 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.132 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.132 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.132 04:48:00 -- accel/accel.sh@21 -- # val=0x1 00:07:53.132 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val= 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val= 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val=xor 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val=3 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val= 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val=software 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val=32 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val=32 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val=1 00:07:53.133 04:48:00 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val=Yes 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val= 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:53.133 04:48:00 -- accel/accel.sh@21 -- # val= 00:07:53.133 04:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:53.133 04:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:55.036 04:48:01 -- accel/accel.sh@21 -- # val= 00:07:55.036 04:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # IFS=: 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # read -r var val 00:07:55.036 04:48:01 -- accel/accel.sh@21 -- # val= 00:07:55.036 04:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # IFS=: 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # read -r var val 00:07:55.036 04:48:01 -- accel/accel.sh@21 -- # val= 00:07:55.036 04:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # IFS=: 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # read -r var val 00:07:55.036 04:48:01 -- accel/accel.sh@21 -- # val= 00:07:55.036 04:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # IFS=: 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # read -r var val 00:07:55.036 04:48:01 -- accel/accel.sh@21 -- # val= 00:07:55.036 04:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # IFS=: 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # read -r var val 00:07:55.036 04:48:01 -- accel/accel.sh@21 -- # val= 00:07:55.036 04:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # IFS=: 00:07:55.036 04:48:01 -- accel/accel.sh@20 -- # read -r var val 00:07:55.036 04:48:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.036 04:48:01 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:55.036 04:48:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.036 00:07:55.036 real 0m4.668s 00:07:55.036 user 0m4.185s 00:07:55.036 sys 0m0.270s 00:07:55.036 04:48:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.036 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:55.036 ************************************ 00:07:55.036 END TEST accel_xor 00:07:55.036 ************************************ 00:07:55.036 04:48:01 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:55.036 04:48:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:55.036 04:48:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.036 04:48:01 -- common/autotest_common.sh@10 -- # set +x 00:07:55.036 ************************************ 00:07:55.036 START TEST accel_dif_verify 00:07:55.036 ************************************ 
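The repeated case/IFS/read lines traced above come from accel.sh replaying accel_perf's settings; they are consistent with a parser loop along the following lines. This is a hypothetical reconstruction inferred from the trace, not the verbatim script:

# Hypothetical sketch: walk "var: val" pairs, remembering the opcode and module
# so the [[ -n ... ]] assertions after each run have something to check.
while IFS=: read -r var val; do
  case "$var" in
    *opcode*) accel_opc=$val ;;      # e.g. accel_opc=xor in the trace above
    *module*) accel_module=$val ;;   # e.g. accel_module=software
  esac
done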
00:07:55.036 04:48:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:55.036 04:48:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.036 04:48:01 -- accel/accel.sh@17 -- # local accel_module 00:07:55.036 04:48:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:55.036 04:48:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:55.036 04:48:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.036 04:48:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.036 04:48:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.036 04:48:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.036 04:48:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.036 04:48:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.036 04:48:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.036 04:48:01 -- accel/accel.sh@42 -- # jq -r . 00:07:55.036 [2024-05-12 04:48:01.959434] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:55.036 [2024-05-12 04:48:01.959609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60170 ] 00:07:55.036 [2024-05-12 04:48:02.127098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.295 [2024-05-12 04:48:02.304767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.198 04:48:04 -- accel/accel.sh@18 -- # out=' 00:07:57.198 SPDK Configuration: 00:07:57.198 Core mask: 0x1 00:07:57.198 00:07:57.198 Accel Perf Configuration: 00:07:57.198 Workload Type: dif_verify 00:07:57.198 Vector size: 4096 bytes 00:07:57.198 Transfer size: 4096 bytes 00:07:57.198 Block size: 512 bytes 00:07:57.198 Metadata size: 8 bytes 00:07:57.198 Vector count 1 00:07:57.198 Module: software 00:07:57.198 Queue depth: 32 00:07:57.198 Allocate depth: 32 00:07:57.198 # threads/core: 1 00:07:57.198 Run time: 1 seconds 00:07:57.198 Verify: No 00:07:57.198 00:07:57.198 Running for 1 seconds... 00:07:57.198 00:07:57.198 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:57.198 ------------------------------------------------------------------------------------ 00:07:57.198 0,0 90176/s 357 MiB/s 0 0 00:07:57.198 ==================================================================================== 00:07:57.198 Total 90176/s 352 MiB/s 0 0' 00:07:57.198 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.198 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.198 04:48:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:57.198 04:48:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:57.198 04:48:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.198 04:48:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.198 04:48:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.198 04:48:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.198 04:48:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.198 04:48:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.198 04:48:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.198 04:48:04 -- accel/accel.sh@42 -- # jq -r . 00:07:57.198 [2024-05-12 04:48:04.297918] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
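The dif_verify configuration above pins down the DIF geometry: 4096-byte transfers over 512-byte blocks with 8 bytes of metadata per block, so each transfer carries eight protected blocks (assuming one 8-byte DIF tuple per block, as those sizes suggest):

echo $(( 4096 / 512 ))       # 8 protected blocks per 4096-byte transfer
echo $(( 4096 / 512 * 8 ))   # 64 bytes of DIF metadata checked per transfer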
00:07:57.198 [2024-05-12 04:48:04.298076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60196 ] 00:07:57.457 [2024-05-12 04:48:04.466110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.716 [2024-05-12 04:48:04.636421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val= 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val= 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val=0x1 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val= 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val= 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val=dif_verify 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val= 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val=software 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 
-- # val=32 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val=32 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val=1 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val=No 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val= 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:57.716 04:48:04 -- accel/accel.sh@21 -- # val= 00:07:57.716 04:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:57.716 04:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:59.617 04:48:06 -- accel/accel.sh@21 -- # val= 00:07:59.617 04:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.617 04:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:59.617 04:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:59.618 04:48:06 -- accel/accel.sh@21 -- # val= 00:07:59.618 04:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:59.618 04:48:06 -- accel/accel.sh@21 -- # val= 00:07:59.618 04:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:59.618 04:48:06 -- accel/accel.sh@21 -- # val= 00:07:59.618 04:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:59.618 04:48:06 -- accel/accel.sh@21 -- # val= 00:07:59.618 04:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:59.618 04:48:06 -- accel/accel.sh@21 -- # val= 00:07:59.618 04:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:59.618 04:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:59.618 04:48:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.618 04:48:06 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:59.618 04:48:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.618 00:07:59.618 real 0m4.657s 00:07:59.618 user 0m4.186s 00:07:59.618 sys 0m0.257s 00:07:59.618 04:48:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.618 04:48:06 -- common/autotest_common.sh@10 -- # set +x 00:07:59.618 ************************************ 00:07:59.618 END TEST 
accel_dif_verify 00:07:59.618 ************************************ 00:07:59.618 04:48:06 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:59.618 04:48:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:59.618 04:48:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.618 04:48:06 -- common/autotest_common.sh@10 -- # set +x 00:07:59.618 ************************************ 00:07:59.618 START TEST accel_dif_generate 00:07:59.618 ************************************ 00:07:59.618 04:48:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:59.618 04:48:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.618 04:48:06 -- accel/accel.sh@17 -- # local accel_module 00:07:59.618 04:48:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:59.618 04:48:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:59.618 04:48:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.618 04:48:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.618 04:48:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.618 04:48:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.618 04:48:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.618 04:48:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.618 04:48:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.618 04:48:06 -- accel/accel.sh@42 -- # jq -r . 00:07:59.618 [2024-05-12 04:48:06.665792] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:59.618 [2024-05-12 04:48:06.665924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:07:59.876 [2024-05-12 04:48:06.821861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.876 [2024-05-12 04:48:06.989458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.410 04:48:08 -- accel/accel.sh@18 -- # out=' 00:08:02.410 SPDK Configuration: 00:08:02.410 Core mask: 0x1 00:08:02.410 00:08:02.410 Accel Perf Configuration: 00:08:02.410 Workload Type: dif_generate 00:08:02.410 Vector size: 4096 bytes 00:08:02.410 Transfer size: 4096 bytes 00:08:02.410 Block size: 512 bytes 00:08:02.410 Metadata size: 8 bytes 00:08:02.410 Vector count 1 00:08:02.410 Module: software 00:08:02.410 Queue depth: 32 00:08:02.410 Allocate depth: 32 00:08:02.410 # threads/core: 1 00:08:02.410 Run time: 1 seconds 00:08:02.410 Verify: No 00:08:02.410 00:08:02.410 Running for 1 seconds... 
00:08:02.410 00:08:02.410 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.410 ------------------------------------------------------------------------------------ 00:08:02.410 0,0 114880/s 455 MiB/s 0 0 00:08:02.410 ==================================================================================== 00:08:02.410 Total 114880/s 448 MiB/s 0 0' 00:08:02.410 04:48:08 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:08 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:02.410 04:48:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.410 04:48:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:02.410 04:48:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.410 04:48:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.410 04:48:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.410 04:48:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.410 04:48:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.410 04:48:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.410 04:48:08 -- accel/accel.sh@42 -- # jq -r . 00:08:02.410 [2024-05-12 04:48:08.972512] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:02.410 [2024-05-12 04:48:08.973388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60269 ] 00:08:02.410 [2024-05-12 04:48:09.142200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.410 [2024-05-12 04:48:09.318004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val= 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val= 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val=0x1 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val= 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val= 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val=dif_generate 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 
00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val= 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val=software 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val=32 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val=32 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val=1 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val=No 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val= 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:02.410 04:48:09 -- accel/accel.sh@21 -- # val= 00:08:02.410 04:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # IFS=: 00:08:02.410 04:48:09 -- accel/accel.sh@20 -- # read -r var val 00:08:04.315 04:48:11 -- accel/accel.sh@21 -- # val= 00:08:04.315 04:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # IFS=: 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # read -r var val 00:08:04.315 04:48:11 -- accel/accel.sh@21 -- # val= 00:08:04.315 04:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # IFS=: 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # read -r var val 00:08:04.315 04:48:11 -- accel/accel.sh@21 -- # val= 00:08:04.315 04:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.315 04:48:11 -- 
accel/accel.sh@20 -- # IFS=: 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # read -r var val 00:08:04.315 04:48:11 -- accel/accel.sh@21 -- # val= 00:08:04.315 04:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # IFS=: 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # read -r var val 00:08:04.315 04:48:11 -- accel/accel.sh@21 -- # val= 00:08:04.315 04:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # IFS=: 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # read -r var val 00:08:04.315 04:48:11 -- accel/accel.sh@21 -- # val= 00:08:04.315 04:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # IFS=: 00:08:04.315 04:48:11 -- accel/accel.sh@20 -- # read -r var val 00:08:04.315 04:48:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.315 04:48:11 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:04.315 04:48:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.315 00:08:04.315 real 0m4.560s 00:08:04.315 user 0m4.088s 00:08:04.315 sys 0m0.260s 00:08:04.315 04:48:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.315 04:48:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.315 ************************************ 00:08:04.315 END TEST accel_dif_generate 00:08:04.315 ************************************ 00:08:04.315 04:48:11 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:04.315 04:48:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:04.315 04:48:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.315 04:48:11 -- common/autotest_common.sh@10 -- # set +x 00:08:04.315 ************************************ 00:08:04.315 START TEST accel_dif_generate_copy 00:08:04.315 ************************************ 00:08:04.315 04:48:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:04.315 04:48:11 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.315 04:48:11 -- accel/accel.sh@17 -- # local accel_module 00:08:04.315 04:48:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:04.315 04:48:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:04.315 04:48:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.315 04:48:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.315 04:48:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.315 04:48:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.315 04:48:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.315 04:48:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.315 04:48:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.315 04:48:11 -- accel/accel.sh@42 -- # jq -r . 00:08:04.315 [2024-05-12 04:48:11.290136] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
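Each accel_perf instance below (and throughout this log) gets its own --file-prefix=spdk_pidNNNN in the EAL parameters, so DPDK's hugepage and shared-memory files are namespaced per process and back-to-back runs cannot collide on stale files. Illustration only; the hugepage mount path is an assumption, not something this log shows:

ls /dev/hugepages/spdk_pid60310map_* 2>/dev/null   # per-run hugepage files (path assumed)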
00:08:04.315 [2024-05-12 04:48:11.290508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:08:04.573 [2024-05-12 04:48:11.459454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.573 [2024-05-12 04:48:11.628647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.473 04:48:13 -- accel/accel.sh@18 -- # out=' 00:08:06.473 SPDK Configuration: 00:08:06.473 Core mask: 0x1 00:08:06.473 00:08:06.473 Accel Perf Configuration: 00:08:06.473 Workload Type: dif_generate_copy 00:08:06.473 Vector size: 4096 bytes 00:08:06.473 Transfer size: 4096 bytes 00:08:06.473 Vector count 1 00:08:06.473 Module: software 00:08:06.473 Queue depth: 32 00:08:06.473 Allocate depth: 32 00:08:06.473 # threads/core: 1 00:08:06.473 Run time: 1 seconds 00:08:06.473 Verify: No 00:08:06.473 00:08:06.473 Running for 1 seconds... 00:08:06.473 00:08:06.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:06.473 ------------------------------------------------------------------------------------ 00:08:06.473 0,0 89024/s 353 MiB/s 0 0 00:08:06.473 ==================================================================================== 00:08:06.473 Total 89024/s 347 MiB/s 0 0' 00:08:06.473 04:48:13 -- accel/accel.sh@20 -- # IFS=: 00:08:06.473 04:48:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:06.473 04:48:13 -- accel/accel.sh@20 -- # read -r var val 00:08:06.473 04:48:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:06.473 04:48:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.473 04:48:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.473 04:48:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.473 04:48:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.473 04:48:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.473 04:48:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.473 04:48:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.473 04:48:13 -- accel/accel.sh@42 -- # jq -r . 00:08:06.473 [2024-05-12 04:48:13.570127] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:06.473 [2024-05-12 04:48:13.570322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60341 ] 00:08:06.769 [2024-05-12 04:48:13.737487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.028 [2024-05-12 04:48:13.911704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val= 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val= 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val=0x1 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val= 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val= 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val= 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val=software 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@23 -- # accel_module=software 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val=32 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val=32 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 
-- # val=1 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val=No 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.028 04:48:14 -- accel/accel.sh@21 -- # val= 00:08:07.028 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.028 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:07.029 04:48:14 -- accel/accel.sh@21 -- # val= 00:08:07.029 04:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.029 04:48:14 -- accel/accel.sh@20 -- # IFS=: 00:08:07.029 04:48:14 -- accel/accel.sh@20 -- # read -r var val 00:08:08.934 04:48:15 -- accel/accel.sh@21 -- # val= 00:08:08.934 04:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # IFS=: 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # read -r var val 00:08:08.934 04:48:15 -- accel/accel.sh@21 -- # val= 00:08:08.934 04:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # IFS=: 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # read -r var val 00:08:08.934 04:48:15 -- accel/accel.sh@21 -- # val= 00:08:08.934 04:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # IFS=: 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # read -r var val 00:08:08.934 04:48:15 -- accel/accel.sh@21 -- # val= 00:08:08.934 04:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # IFS=: 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # read -r var val 00:08:08.934 04:48:15 -- accel/accel.sh@21 -- # val= 00:08:08.934 04:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # IFS=: 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # read -r var val 00:08:08.934 04:48:15 -- accel/accel.sh@21 -- # val= 00:08:08.934 04:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # IFS=: 00:08:08.934 04:48:15 -- accel/accel.sh@20 -- # read -r var val 00:08:08.934 04:48:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:08.934 04:48:15 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:08.934 04:48:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.934 00:08:08.934 real 0m4.589s 00:08:08.934 user 0m4.107s 00:08:08.934 sys 0m0.274s 00:08:08.934 04:48:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.934 ************************************ 00:08:08.934 END TEST accel_dif_generate_copy 00:08:08.934 ************************************ 00:08:08.934 04:48:15 -- common/autotest_common.sh@10 -- # set +x 00:08:08.934 04:48:15 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:08.934 04:48:15 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.934 04:48:15 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:08.934 04:48:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.934 04:48:15 -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.934 ************************************ 00:08:08.934 START TEST accel_comp 00:08:08.934 ************************************ 00:08:08.934 04:48:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.934 04:48:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.934 04:48:15 -- accel/accel.sh@17 -- # local accel_module 00:08:08.934 04:48:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.934 04:48:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.934 04:48:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.934 04:48:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.934 04:48:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.934 04:48:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.934 04:48:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.934 04:48:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.934 04:48:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.934 04:48:15 -- accel/accel.sh@42 -- # jq -r . 00:08:08.934 [2024-05-12 04:48:15.928216] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:08.934 [2024-05-12 04:48:15.928343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60388 ] 00:08:09.193 [2024-05-12 04:48:16.091442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.193 [2024-05-12 04:48:16.297203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.727 04:48:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:11.727 00:08:11.727 SPDK Configuration: 00:08:11.727 Core mask: 0x1 00:08:11.727 00:08:11.727 Accel Perf Configuration: 00:08:11.727 Workload Type: compress 00:08:11.727 Transfer size: 4096 bytes 00:08:11.727 Vector count 1 00:08:11.727 Module: software 00:08:11.727 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.727 Queue depth: 32 00:08:11.727 Allocate depth: 32 00:08:11.727 # threads/core: 1 00:08:11.727 Run time: 1 seconds 00:08:11.727 Verify: No 00:08:11.727 00:08:11.727 Running for 1 seconds... 
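compress is the first workload here that needs real input, hence the extra "Preparing input file..." line above and the -l flag pointing at test/accel/bib; the decompress run later in the log takes the same file plus -y to verify the round-trip. Both command lines as they appear in this log:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y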
00:08:11.727 00:08:11.727 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.727 ------------------------------------------------------------------------------------ 00:08:11.727 0,0 40448/s 168 MiB/s 0 0 00:08:11.727 ==================================================================================== 00:08:11.727 Total 40448/s 158 MiB/s 0 0' 00:08:11.727 04:48:18 -- accel/accel.sh@20 -- # IFS=: 00:08:11.727 04:48:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.727 04:48:18 -- accel/accel.sh@20 -- # read -r var val 00:08:11.727 04:48:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.727 04:48:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.727 04:48:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.727 04:48:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.727 04:48:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.727 04:48:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.727 04:48:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.727 04:48:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.727 04:48:18 -- accel/accel.sh@42 -- # jq -r . 00:08:11.727 [2024-05-12 04:48:18.440971] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:11.727 [2024-05-12 04:48:18.441122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60414 ] 00:08:11.727 [2024-05-12 04:48:18.607909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.727 [2024-05-12 04:48:18.811384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=0x1 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=compress 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@24 -- # accel_opc=compress 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 
00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=software 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@23 -- # accel_module=software 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=32 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=32 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=1 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val=No 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 04:48:19 -- accel/accel.sh@21 -- # val= 00:08:11.987 04:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 04:48:19 -- accel/accel.sh@20 -- # read -r var val 00:08:13.892 04:48:20 -- accel/accel.sh@21 -- # val= 00:08:13.892 04:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # IFS=: 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # read -r var val 00:08:13.892 04:48:20 -- accel/accel.sh@21 -- # val= 00:08:13.892 04:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # IFS=: 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # read -r var val 00:08:13.892 04:48:20 -- accel/accel.sh@21 -- # val= 00:08:13.892 04:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # IFS=: 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # read -r var val 00:08:13.892 04:48:20 -- accel/accel.sh@21 -- # val= 
00:08:13.892 04:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # IFS=: 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # read -r var val 00:08:13.892 04:48:20 -- accel/accel.sh@21 -- # val= 00:08:13.892 04:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # IFS=: 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # read -r var val 00:08:13.892 04:48:20 -- accel/accel.sh@21 -- # val= 00:08:13.892 04:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # IFS=: 00:08:13.892 04:48:20 -- accel/accel.sh@20 -- # read -r var val 00:08:13.892 04:48:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.892 04:48:20 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:13.892 04:48:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.892 00:08:13.892 real 0m5.027s 00:08:13.892 user 0m4.500s 00:08:13.892 sys 0m0.310s 00:08:13.892 04:48:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.892 ************************************ 00:08:13.892 END TEST accel_comp 00:08:13.892 ************************************ 00:08:13.892 04:48:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.892 04:48:20 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.892 04:48:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:13.892 04:48:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.892 04:48:20 -- common/autotest_common.sh@10 -- # set +x 00:08:13.892 ************************************ 00:08:13.892 START TEST accel_decomp 00:08:13.892 ************************************ 00:08:13.892 04:48:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.892 04:48:20 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.892 04:48:20 -- accel/accel.sh@17 -- # local accel_module 00:08:13.892 04:48:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.892 04:48:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.892 04:48:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.892 04:48:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.892 04:48:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.892 04:48:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.892 04:48:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.892 04:48:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.892 04:48:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.892 04:48:20 -- accel/accel.sh@42 -- # jq -r . 00:08:13.892 [2024-05-12 04:48:20.996475] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:13.892 [2024-05-12 04:48:20.996646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60466 ] 00:08:14.152 [2024-05-12 04:48:21.164851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.411 [2024-05-12 04:48:21.375409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.957 04:48:23 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:16.957 00:08:16.957 SPDK Configuration: 00:08:16.957 Core mask: 0x1 00:08:16.957 00:08:16.957 Accel Perf Configuration: 00:08:16.957 Workload Type: decompress 00:08:16.957 Transfer size: 4096 bytes 00:08:16.957 Vector count 1 00:08:16.957 Module: software 00:08:16.957 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.957 Queue depth: 32 00:08:16.957 Allocate depth: 32 00:08:16.957 # threads/core: 1 00:08:16.957 Run time: 1 seconds 00:08:16.957 Verify: Yes 00:08:16.957 00:08:16.957 Running for 1 seconds... 00:08:16.957 00:08:16.957 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:16.957 ------------------------------------------------------------------------------------ 00:08:16.957 0,0 53120/s 207 MiB/s 0 0 00:08:16.957 ==================================================================================== 00:08:16.957 Total 53120/s 207 MiB/s 0 0' 00:08:16.957 04:48:23 -- accel/accel.sh@20 -- # IFS=: 00:08:16.957 04:48:23 -- accel/accel.sh@20 -- # read -r var val 00:08:16.957 04:48:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:16.957 04:48:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:16.957 04:48:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.957 04:48:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.957 04:48:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.957 04:48:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.957 04:48:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.957 04:48:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.957 04:48:23 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.957 04:48:23 -- accel/accel.sh@42 -- # jq -r . 00:08:16.957 [2024-05-12 04:48:23.521447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
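[Editor's note] A quick consistency check on these tables: the Bandwidth column is simply transfers per second multiplied by the transfer size, reported in MiB/s with integer truncation. For the single-core decompress run above (4096-byte transfers), shell arithmetic reproduces the Total row:

  # 53120 transfers/s x 4096 B = 217,579,520 B/s = 207 MiB/s (truncated)
  echo $(( 53120 * 4096 / 1048576 ))   # prints 207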
00:08:16.957 [2024-05-12 04:48:23.521605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60492 ] 00:08:16.957 [2024-05-12 04:48:23.695526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.957 [2024-05-12 04:48:23.895503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val=0x1 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val=decompress 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val=software 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@23 -- # accel_module=software 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val=32 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- 
accel/accel.sh@21 -- # val=32 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val=1 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val=Yes 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:17.219 04:48:24 -- accel/accel.sh@21 -- # val= 00:08:17.219 04:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # IFS=: 00:08:17.219 04:48:24 -- accel/accel.sh@20 -- # read -r var val 00:08:19.123 04:48:25 -- accel/accel.sh@21 -- # val= 00:08:19.123 04:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # IFS=: 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # read -r var val 00:08:19.123 04:48:25 -- accel/accel.sh@21 -- # val= 00:08:19.123 04:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # IFS=: 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # read -r var val 00:08:19.123 04:48:25 -- accel/accel.sh@21 -- # val= 00:08:19.123 04:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # IFS=: 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # read -r var val 00:08:19.123 04:48:25 -- accel/accel.sh@21 -- # val= 00:08:19.123 04:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # IFS=: 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # read -r var val 00:08:19.123 04:48:25 -- accel/accel.sh@21 -- # val= 00:08:19.123 04:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # IFS=: 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # read -r var val 00:08:19.123 04:48:25 -- accel/accel.sh@21 -- # val= 00:08:19.123 04:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # IFS=: 00:08:19.123 04:48:25 -- accel/accel.sh@20 -- # read -r var val 00:08:19.123 04:48:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:19.123 04:48:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:19.123 04:48:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.123 00:08:19.123 real 0m5.005s 00:08:19.123 user 0m4.475s 00:08:19.123 sys 0m0.315s 00:08:19.123 04:48:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.123 ************************************ 00:08:19.123 END TEST accel_decomp 00:08:19.123 ************************************ 00:08:19.123 04:48:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.123 04:48:25 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:08:19.123 04:48:25 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:19.123 04:48:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.123 04:48:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.123 ************************************ 00:08:19.123 START TEST accel_decmop_full 00:08:19.123 ************************************ 00:08:19.123 04:48:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:19.123 04:48:26 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.123 04:48:26 -- accel/accel.sh@17 -- # local accel_module 00:08:19.123 04:48:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:19.123 04:48:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:19.123 04:48:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.123 04:48:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:19.123 04:48:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.123 04:48:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.123 04:48:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:19.123 04:48:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:19.123 04:48:26 -- accel/accel.sh@41 -- # local IFS=, 00:08:19.123 04:48:26 -- accel/accel.sh@42 -- # jq -r . 00:08:19.123 [2024-05-12 04:48:26.054306] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:19.123 [2024-05-12 04:48:26.054471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60539 ] 00:08:19.123 [2024-05-12 04:48:26.227361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.382 [2024-05-12 04:48:26.452092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.296 04:48:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:21.296 00:08:21.296 SPDK Configuration: 00:08:21.296 Core mask: 0x1 00:08:21.296 00:08:21.296 Accel Perf Configuration: 00:08:21.296 Workload Type: decompress 00:08:21.296 Transfer size: 111250 bytes 00:08:21.296 Vector count 1 00:08:21.296 Module: software 00:08:21.296 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:21.296 Queue depth: 32 00:08:21.296 Allocate depth: 32 00:08:21.296 # threads/core: 1 00:08:21.296 Run time: 1 seconds 00:08:21.296 Verify: Yes 00:08:21.296 00:08:21.296 Running for 1 seconds... 
00:08:21.296 00:08:21.296 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:21.296 ------------------------------------------------------------------------------------ 00:08:21.296 0,0 4800/s 509 MiB/s 0 0 00:08:21.296 ==================================================================================== 00:08:21.296 Total 4800/s 509 MiB/s 0 0' 00:08:21.296 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.296 04:48:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:21.296 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.296 04:48:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:21.296 04:48:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.296 04:48:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:21.296 04:48:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.296 04:48:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.296 04:48:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:21.296 04:48:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:21.296 04:48:28 -- accel/accel.sh@41 -- # local IFS=, 00:08:21.296 04:48:28 -- accel/accel.sh@42 -- # jq -r . 00:08:21.296 [2024-05-12 04:48:28.384444] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:21.296 [2024-05-12 04:48:28.384595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60569 ] 00:08:21.555 [2024-05-12 04:48:28.555417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.814 [2024-05-12 04:48:28.712835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=0x1 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=decompress 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:21.814 04:48:28 -- accel/accel.sh@20
-- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=software 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=32 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=32 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=1 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val=Yes 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:21.814 04:48:28 -- accel/accel.sh@21 -- # val= 00:08:21.814 04:48:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # IFS=: 00:08:21.814 04:48:28 -- accel/accel.sh@20 -- # read -r var val 00:08:23.713 04:48:30 -- accel/accel.sh@21 -- # val= 00:08:23.713 04:48:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # IFS=: 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # read -r var val 00:08:23.713 04:48:30 -- accel/accel.sh@21 -- # val= 00:08:23.713 04:48:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # IFS=: 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # read -r var val 00:08:23.713 04:48:30 -- accel/accel.sh@21 -- # val= 00:08:23.713 04:48:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # IFS=: 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # read -r var val 00:08:23.713 04:48:30 -- accel/accel.sh@21 -- # 
val= 00:08:23.713 04:48:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # IFS=: 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # read -r var val 00:08:23.713 04:48:30 -- accel/accel.sh@21 -- # val= 00:08:23.713 04:48:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # IFS=: 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # read -r var val 00:08:23.713 04:48:30 -- accel/accel.sh@21 -- # val= 00:08:23.713 04:48:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # IFS=: 00:08:23.713 04:48:30 -- accel/accel.sh@20 -- # read -r var val 00:08:23.713 04:48:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:23.713 04:48:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:23.713 04:48:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.713 00:08:23.713 real 0m4.601s 00:08:23.713 user 0m4.109s 00:08:23.713 sys 0m0.286s 00:08:23.713 04:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.713 04:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:23.713 ************************************ 00:08:23.713 END TEST accel_decmop_full 00:08:23.713 ************************************ 00:08:23.713 04:48:30 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:23.713 04:48:30 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:23.713 04:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.713 04:48:30 -- common/autotest_common.sh@10 -- # set +x 00:08:23.713 ************************************ 00:08:23.713 START TEST accel_decomp_mcore 00:08:23.713 ************************************ 00:08:23.713 04:48:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:23.713 04:48:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:23.713 04:48:30 -- accel/accel.sh@17 -- # local accel_module 00:08:23.713 04:48:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:23.713 04:48:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:23.713 04:48:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.713 04:48:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:23.713 04:48:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.713 04:48:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.713 04:48:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:23.713 04:48:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:23.713 04:48:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:23.713 04:48:30 -- accel/accel.sh@42 -- # jq -r . 00:08:23.713 [2024-05-12 04:48:30.699633] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
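[Editor's note] The accel_decmop_full test that just finished (the spelling is the harness's own) adds "-y -o 0" to the decompress run, and its configuration block reports 111250-byte transfers instead of 4096; reading the log, the larger size appears to come from the prepared bib input rather than the command line, which is an inference, not a documented accel_perf contract. The effect of the bigger transfers on software decompress throughput is easy to see numerically:

  # Same workload, different transfer sizes (values from the two tables above):
  echo $(( 4800 * 111250 / 1048576 ))   # full-size transfers: 509 MiB/s
  echo $(( 53120 * 4096 / 1048576 ))    # 4 KiB transfers:     207 MiB/s

Fewer, larger operations amortize the per-operation overhead of the software path, roughly 2.5x the throughput here.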
00:08:23.713 [2024-05-12 04:48:30.699763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60611 ] 00:08:23.972 [2024-05-12 04:48:30.854853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.972 [2024-05-12 04:48:31.017181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.972 [2024-05-12 04:48:31.017325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.972 [2024-05-12 04:48:31.017453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.972 [2024-05-12 04:48:31.017467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.880 04:48:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:25.880 00:08:25.880 SPDK Configuration: 00:08:25.880 Core mask: 0xf 00:08:25.880 00:08:25.880 Accel Perf Configuration: 00:08:25.880 Workload Type: decompress 00:08:25.880 Transfer size: 4096 bytes 00:08:25.880 Vector count 1 00:08:25.880 Module: software 00:08:25.880 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:25.880 Queue depth: 32 00:08:25.880 Allocate depth: 32 00:08:25.880 # threads/core: 1 00:08:25.880 Run time: 1 seconds 00:08:25.880 Verify: Yes 00:08:25.880 00:08:25.880 Running for 1 seconds... 00:08:25.880 00:08:25.880 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:25.880 ------------------------------------------------------------------------------------ 00:08:25.880 0,0 56832/s 222 MiB/s 0 0 00:08:25.880 3,0 56896/s 222 MiB/s 0 0 00:08:25.880 2,0 57248/s 223 MiB/s 0 0 00:08:25.880 1,0 57120/s 223 MiB/s 0 0 00:08:25.880 ==================================================================================== 00:08:25.880 Total 228096/s 891 MiB/s 0 0' 00:08:25.880 04:48:32 -- accel/accel.sh@20 -- # IFS=: 00:08:25.880 04:48:32 -- accel/accel.sh@20 -- # read -r var val 00:08:25.880 04:48:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:25.880 04:48:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:25.880 04:48:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.880 04:48:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.880 04:48:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.880 04:48:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.880 04:48:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.880 04:48:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.880 04:48:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.880 04:48:32 -- accel/accel.sh@42 -- # jq -r . 00:08:26.139 [2024-05-12 04:48:33.029755] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
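[Editor's note] The "-m 0xf" flag in the mcore run above is a hex CPU mask: binary 1111 selects cores 0 through 3, which matches EAL's "Total cores available: 4", the four reactor start-up notices, and the four Core,Thread rows in the table. Throughput scales close to linearly here (4 x ~222 MiB/s is about the 891 MiB/s total). A small illustrative snippet for the mask arithmetic, not part of the harness itself:

  # Count the set bits in the core mask; each bit is one reactor / one table row.
  mask=0xf
  bits=0
  for (( m = mask; m > 0; m >>= 1 )); do bits=$(( bits + (m & 1) )); done
  echo "$bits cores selected"   # prints "4 cores selected" for 0xf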
00:08:26.139 [2024-05-12 04:48:33.030002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:08:26.139 [2024-05-12 04:48:33.215849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.397 [2024-05-12 04:48:33.419345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.397 [2024-05-12 04:48:33.419499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.397 [2024-05-12 04:48:33.419618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.397 [2024-05-12 04:48:33.419624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val=0xf 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val=decompress 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.656 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.656 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.656 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val=software 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@23 -- # accel_module=software 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 
00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val=32 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val=32 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val=1 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val=Yes 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:26.657 04:48:33 -- accel/accel.sh@21 -- # val= 00:08:26.657 04:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # IFS=: 00:08:26.657 04:48:33 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- 
accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@21 -- # val= 00:08:28.561 04:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # IFS=: 00:08:28.561 04:48:35 -- accel/accel.sh@20 -- # read -r var val 00:08:28.561 04:48:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:28.561 04:48:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:28.561 04:48:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.561 00:08:28.561 real 0m4.832s 00:08:28.561 user 0m14.111s 00:08:28.561 sys 0m0.339s 00:08:28.561 ************************************ 00:08:28.561 END TEST accel_decomp_mcore 00:08:28.561 ************************************ 00:08:28.561 04:48:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.561 04:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:28.561 04:48:35 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:28.561 04:48:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:28.561 04:48:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.561 04:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:28.561 ************************************ 00:08:28.561 START TEST accel_decomp_full_mcore 00:08:28.561 ************************************ 00:08:28.561 04:48:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:28.561 04:48:35 -- accel/accel.sh@16 -- # local accel_opc 00:08:28.561 04:48:35 -- accel/accel.sh@17 -- # local accel_module 00:08:28.561 04:48:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:28.561 04:48:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:28.561 04:48:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.561 04:48:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:28.561 04:48:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.561 04:48:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.561 04:48:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:28.561 04:48:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:28.561 04:48:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:28.561 04:48:35 -- accel/accel.sh@42 -- # jq -r . 00:08:28.561 [2024-05-12 04:48:35.594017] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:28.561 [2024-05-12 04:48:35.594204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60690 ] 00:08:28.820 [2024-05-12 04:48:35.771235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.079 [2024-05-12 04:48:35.978069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.079 [2024-05-12 04:48:35.978283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.079 [2024-05-12 04:48:35.978328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.079 [2024-05-12 04:48:35.978328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.982 04:48:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:30.982 00:08:30.982 SPDK Configuration: 00:08:30.982 Core mask: 0xf 00:08:30.982 00:08:30.982 Accel Perf Configuration: 00:08:30.982 Workload Type: decompress 00:08:30.982 Transfer size: 111250 bytes 00:08:30.982 Vector count 1 00:08:30.982 Module: software 00:08:30.982 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:30.982 Queue depth: 32 00:08:30.982 Allocate depth: 32 00:08:30.982 # threads/core: 1 00:08:30.982 Run time: 1 seconds 00:08:30.982 Verify: Yes 00:08:30.982 00:08:30.982 Running for 1 seconds... 00:08:30.982 00:08:30.982 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:30.982 ------------------------------------------------------------------------------------ 00:08:30.982 0,0 4064/s 431 MiB/s 0 0 00:08:30.982 3,0 4096/s 434 MiB/s 0 0 00:08:30.982 2,0 4064/s 431 MiB/s 0 0 00:08:30.982 1,0 4064/s 431 MiB/s 0 0 00:08:30.982 ==================================================================================== 00:08:30.982 Total 16288/s 1728 MiB/s 0 0' 00:08:30.982 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:30.982 04:48:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:30.982 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:30.982 04:48:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:30.982 04:48:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:30.982 04:48:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:30.982 04:48:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.982 04:48:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.982 04:48:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:30.982 04:48:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:30.982 04:48:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:30.982 04:48:38 -- accel/accel.sh@42 -- # jq -r . 00:08:31.242 [2024-05-12 04:48:38.146983] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
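[Editor's note] accel_decomp_full_mcore combines both earlier variations, full-size transfers ("-o 0") and the four-core mask ("-m 0xf"), and the two effects multiply. The Total row again follows from the per-core figures by the same transfers-times-size arithmetic:

  echo $(( 16288 * 111250 / 1048576 ))   # 4 cores x 111250-B transfers: 1728 MiB/s
  echo $(( 4064 * 111250 / 1048576 ))    # one core's share: 431 MiB/s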
00:08:31.242 [2024-05-12 04:48:38.147101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60724 ] 00:08:31.242 [2024-05-12 04:48:38.314470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.501 [2024-05-12 04:48:38.516899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.501 [2024-05-12 04:48:38.517088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.501 [2024-05-12 04:48:38.517248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.501 [2024-05-12 04:48:38.517461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.759 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.759 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.759 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.759 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.759 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.759 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=0xf 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=decompress 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=software 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@23 -- # accel_module=software 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 
00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=32 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=32 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=1 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val=Yes 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:31.760 04:48:38 -- accel/accel.sh@21 -- # val= 00:08:31.760 04:48:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # IFS=: 00:08:31.760 04:48:38 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- 
accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@21 -- # val= 00:08:33.676 04:48:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # IFS=: 00:08:33.676 04:48:40 -- accel/accel.sh@20 -- # read -r var val 00:08:33.676 04:48:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:33.677 04:48:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:33.677 04:48:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.677 00:08:33.677 real 0m5.078s 00:08:33.677 user 0m14.764s 00:08:33.677 sys 0m0.345s 00:08:33.677 04:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.677 04:48:40 -- common/autotest_common.sh@10 -- # set +x 00:08:33.677 ************************************ 00:08:33.677 END TEST accel_decomp_full_mcore 00:08:33.677 ************************************ 00:08:33.677 04:48:40 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.677 04:48:40 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:33.677 04:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.677 04:48:40 -- common/autotest_common.sh@10 -- # set +x 00:08:33.677 ************************************ 00:08:33.677 START TEST accel_decomp_mthread 00:08:33.677 ************************************ 00:08:33.677 04:48:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.677 04:48:40 -- accel/accel.sh@16 -- # local accel_opc 00:08:33.677 04:48:40 -- accel/accel.sh@17 -- # local accel_module 00:08:33.677 04:48:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.677 04:48:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.677 04:48:40 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.677 04:48:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.677 04:48:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.677 04:48:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.677 04:48:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.677 04:48:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.677 04:48:40 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.677 04:48:40 -- accel/accel.sh@42 -- # jq -r . 00:08:33.677 [2024-05-12 04:48:40.721182] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:33.677 [2024-05-12 04:48:40.721394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60774 ] 00:08:33.936 [2024-05-12 04:48:40.897869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.195 [2024-05-12 04:48:41.094477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.098 04:48:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:36.098 00:08:36.098 SPDK Configuration: 00:08:36.098 Core mask: 0x1 00:08:36.098 00:08:36.098 Accel Perf Configuration: 00:08:36.098 Workload Type: decompress 00:08:36.098 Transfer size: 4096 bytes 00:08:36.098 Vector count 1 00:08:36.098 Module: software 00:08:36.098 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:36.098 Queue depth: 32 00:08:36.098 Allocate depth: 32 00:08:36.098 # threads/core: 2 00:08:36.098 Run time: 1 seconds 00:08:36.098 Verify: Yes 00:08:36.098 00:08:36.098 Running for 1 seconds... 00:08:36.098 00:08:36.098 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:36.098 ------------------------------------------------------------------------------------ 00:08:36.098 0,1 27904/s 109 MiB/s 0 0 00:08:36.098 0,0 27808/s 108 MiB/s 0 0 00:08:36.098 ==================================================================================== 00:08:36.098 Total 55712/s 217 MiB/s 0 0' 00:08:36.098 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.098 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.098 04:48:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:36.098 04:48:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:36.098 04:48:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:36.098 04:48:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:36.098 04:48:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.098 04:48:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.098 04:48:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:36.098 04:48:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:36.098 04:48:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:36.098 04:48:43 -- accel/accel.sh@42 -- # jq -r . 00:08:36.098 [2024-05-12 04:48:43.186716] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
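[Editor's note] In the mthread run above, "-T 2" asks accel_perf for two worker threads per selected core, hence "# threads/core: 2" and the two rows 0,0 and 0,1 sharing core 0. Both threads compete for one core's cycles, so each delivers roughly half the earlier single-thread rate, while their sum lands near it:

  echo $(( (27904 + 27808) * 4096 / 1048576 ))   # both threads on core 0: 217 MiB/s

The slight edge over the single-thread 207 MiB/s suggests some overlap of submission and completion work between the two threads, though that reading is an inference from these numbers alone.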
00:08:36.098 [2024-05-12 04:48:43.186927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60800 ] 00:08:36.356 [2024-05-12 04:48:43.356127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.615 [2024-05-12 04:48:43.516277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val=0x1 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val=decompress 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.615 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.615 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.615 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val=software 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@23 -- # accel_module=software 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val=32 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- 
accel/accel.sh@21 -- # val=32 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val=2 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val=Yes 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:36.616 04:48:43 -- accel/accel.sh@21 -- # val= 00:08:36.616 04:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # IFS=: 00:08:36.616 04:48:43 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@21 -- # val= 00:08:38.519 04:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # IFS=: 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@21 -- # val= 00:08:38.519 04:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # IFS=: 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@21 -- # val= 00:08:38.519 04:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # IFS=: 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@21 -- # val= 00:08:38.519 04:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # IFS=: 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@21 -- # val= 00:08:38.519 04:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # IFS=: 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@21 -- # val= 00:08:38.519 04:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # IFS=: 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@21 -- # val= 00:08:38.519 04:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # IFS=: 00:08:38.519 04:48:45 -- accel/accel.sh@20 -- # read -r var val 00:08:38.519 04:48:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:38.519 04:48:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:38.519 04:48:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.519 00:08:38.519 real 0m4.708s 00:08:38.519 user 0m4.218s 00:08:38.519 sys 0m0.273s 00:08:38.519 04:48:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.519 ************************************ 00:08:38.519 END TEST accel_decomp_mthread 00:08:38.519 
************************************ 00:08:38.519 04:48:45 -- common/autotest_common.sh@10 -- # set +x 00:08:38.519 04:48:45 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.519 04:48:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:38.519 04:48:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.519 04:48:45 -- common/autotest_common.sh@10 -- # set +x 00:08:38.519 ************************************ 00:08:38.519 START TEST accel_deomp_full_mthread 00:08:38.519 ************************************ 00:08:38.519 04:48:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.519 04:48:45 -- accel/accel.sh@16 -- # local accel_opc 00:08:38.519 04:48:45 -- accel/accel.sh@17 -- # local accel_module 00:08:38.519 04:48:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.519 04:48:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:38.519 04:48:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:38.519 04:48:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:38.519 04:48:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.519 04:48:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.519 04:48:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:38.519 04:48:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:38.519 04:48:45 -- accel/accel.sh@41 -- # local IFS=, 00:08:38.519 04:48:45 -- accel/accel.sh@42 -- # jq -r . 00:08:38.519 [2024-05-12 04:48:45.479094] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:38.519 [2024-05-12 04:48:45.479273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60851 ] 00:08:38.778 [2024-05-12 04:48:45.648227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.778 [2024-05-12 04:48:45.808271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.682 04:48:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:40.682 00:08:40.682 SPDK Configuration: 00:08:40.682 Core mask: 0x1 00:08:40.682 00:08:40.682 Accel Perf Configuration: 00:08:40.682 Workload Type: decompress 00:08:40.682 Transfer size: 111250 bytes 00:08:40.682 Vector count 1 00:08:40.682 Module: software 00:08:40.682 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:40.682 Queue depth: 32 00:08:40.682 Allocate depth: 32 00:08:40.682 # threads/core: 2 00:08:40.682 Run time: 1 seconds 00:08:40.682 Verify: Yes 00:08:40.682 00:08:40.682 Running for 1 seconds... 
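Relative to the 4096-byte run above, this "full" variant adds -o 0, and the configuration block reports a transfer size of 111250 bytes; reading the two configuration blocks side by side suggests -o 0 lets accel_perf take the transfer size from the input file rather than the 4 KiB default, though that is an inference from the traces, not from accel_perf documentation. Transfer counts in the table that follows drop by roughly an order of magnitude while byte throughput rises, and the Total row checks out arithmetically:

  # 4992 transfers/s x 111250 bytes each, expressed in MiB/s:
  echo $(( 4992 * 111250 / 1024 / 1024 ))   # -> 529, matching the Total row below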
00:08:40.682 00:08:40.682 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:40.682 ------------------------------------------------------------------------------------ 00:08:40.682 0,1 2528/s 104 MiB/s 0 0 00:08:40.682 0,0 2464/s 101 MiB/s 0 0 00:08:40.682 ==================================================================================== 00:08:40.682 Total 4992/s 529 MiB/s 0 0' 00:08:40.682 04:48:47 -- accel/accel.sh@20 -- # IFS=: 00:08:40.682 04:48:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:40.682 04:48:47 -- accel/accel.sh@20 -- # read -r var val 00:08:40.682 04:48:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:40.682 04:48:47 -- accel/accel.sh@12 -- # build_accel_config 00:08:40.682 04:48:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:40.682 04:48:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.682 04:48:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.682 04:48:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:40.682 04:48:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:40.682 04:48:47 -- accel/accel.sh@41 -- # local IFS=, 00:08:40.682 04:48:47 -- accel/accel.sh@42 -- # jq -r . 00:08:40.682 [2024-05-12 04:48:47.759822] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:40.682 [2024-05-12 04:48:47.759984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:08:40.940 [2024-05-12 04:48:47.930579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.200 [2024-05-12 04:48:48.085276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=0x1 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=decompress 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=software 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@23 -- # accel_module=software 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=32 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=32 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=2 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val=Yes 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:41.200 04:48:48 -- accel/accel.sh@21 -- # val= 00:08:41.200 04:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # IFS=: 00:08:41.200 04:48:48 -- accel/accel.sh@20 -- # read -r var val 00:08:43.102 04:48:50 -- accel/accel.sh@21 -- # val= 00:08:43.102 04:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # IFS=: 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # read -r var val 00:08:43.102 04:48:50 -- accel/accel.sh@21 -- # val= 00:08:43.102 04:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # IFS=: 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # read -r var val 00:08:43.102 04:48:50 -- accel/accel.sh@21 -- # val= 00:08:43.102 04:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # IFS=: 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # 
read -r var val 00:08:43.102 04:48:50 -- accel/accel.sh@21 -- # val= 00:08:43.102 04:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # IFS=: 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # read -r var val 00:08:43.102 04:48:50 -- accel/accel.sh@21 -- # val= 00:08:43.102 04:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # IFS=: 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # read -r var val 00:08:43.102 04:48:50 -- accel/accel.sh@21 -- # val= 00:08:43.102 04:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # IFS=: 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # read -r var val 00:08:43.102 04:48:50 -- accel/accel.sh@21 -- # val= 00:08:43.102 04:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # IFS=: 00:08:43.102 04:48:50 -- accel/accel.sh@20 -- # read -r var val 00:08:43.103 04:48:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:43.103 04:48:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:43.103 04:48:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.103 00:08:43.103 real 0m4.604s 00:08:43.103 user 0m4.144s 00:08:43.103 sys 0m0.251s 00:08:43.103 04:48:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.103 ************************************ 00:08:43.103 END TEST accel_deomp_full_mthread 00:08:43.103 04:48:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.103 ************************************ 00:08:43.103 04:48:50 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:43.103 04:48:50 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:43.103 04:48:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:43.103 04:48:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.103 04:48:50 -- accel/accel.sh@129 -- # build_accel_config 00:08:43.103 04:48:50 -- common/autotest_common.sh@10 -- # set +x 00:08:43.103 04:48:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:43.103 04:48:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:43.103 04:48:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.103 04:48:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:43.103 04:48:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:43.103 04:48:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:43.103 04:48:50 -- accel/accel.sh@42 -- # jq -r . 00:08:43.103 ************************************ 00:08:43.103 START TEST accel_dif_functional_tests 00:08:43.103 ************************************ 00:08:43.103 04:48:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:43.103 [2024-05-12 04:48:50.172573] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
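The dif binary launched here exercises the accel framework's end-to-end data-protection path: each protected block carries an 8-byte DIF footer holding a 2-byte CRC guard, a 2-byte application tag and a 4-byte reference tag, and the suite checks both that correctly generated tags verify and that mismatched values are reported. The *ERROR* lines printed under passing tests below are therefore the expected negative-path output, not failures. Note also that the core mask switches from 0x1 to 0x7 for this test, which is why three reactors start:

  # 0x7 = 0b111, so bits 0..2 are set and reactors run on cores 0, 1 and 2
  mask=0x7
  for i in {0..7}; do (( (mask >> i) & 1 )) && echo "core $i"; done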
00:08:43.103 [2024-05-12 04:48:50.172743] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60920 ] 00:08:43.361 [2024-05-12 04:48:50.343781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.619 [2024-05-12 04:48:50.520974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.619 [2024-05-12 04:48:50.521093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.619 [2024-05-12 04:48:50.521106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.877 00:08:43.877 00:08:43.877 CUnit - A unit testing framework for C - Version 2.1-3 00:08:43.877 http://cunit.sourceforge.net/ 00:08:43.877 00:08:43.877 00:08:43.877 Suite: accel_dif 00:08:43.877 Test: verify: DIF generated, GUARD check ...passed 00:08:43.877 Test: verify: DIF generated, APPTAG check ...passed 00:08:43.877 Test: verify: DIF generated, REFTAG check ...passed 00:08:43.877 Test: verify: DIF not generated, GUARD check ...[2024-05-12 04:48:50.796717] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:43.877 [2024-05-12 04:48:50.796817] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:43.877 passed 00:08:43.877 Test: verify: DIF not generated, APPTAG check ...[2024-05-12 04:48:50.796892] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:43.877 passed 00:08:43.877 Test: verify: DIF not generated, REFTAG check ...[2024-05-12 04:48:50.796963] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:43.877 [2024-05-12 04:48:50.797011] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:43.877 [2024-05-12 04:48:50.797061] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:43.877 passed 00:08:43.877 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:43.877 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:43.877 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-05-12 04:48:50.797172] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:43.877 passed 00:08:43.877 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:43.877 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:43.877 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:43.877 Test: generate copy: DIF generated, GUARD check ...passed 00:08:43.877 Test: generate copy: DIF generated, APTTAG check ...[2024-05-12 04:48:50.797431] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:43.877 passed 00:08:43.877 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:43.877 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:43.877 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:43.877 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:43.877 Test: generate copy: iovecs-len validate ...[2024-05-12 04:48:50.797906] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:43.877 passed 00:08:43.877 Test: generate copy: buffer alignment validate ...passed 00:08:43.877 00:08:43.877 Run Summary: Type Total Ran Passed Failed Inactive 00:08:43.877 suites 1 1 n/a 0 0 00:08:43.877 tests 20 20 20 0 0 00:08:43.877 asserts 204 204 204 0 n/a 00:08:43.877 00:08:43.877 Elapsed time = 0.003 seconds 00:08:44.813 00:08:44.813 real 0m1.775s 00:08:44.813 user 0m3.428s 00:08:44.813 sys 0m0.193s 00:08:44.813 04:48:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.813 ************************************ 00:08:44.813 END TEST accel_dif_functional_tests 00:08:44.813 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:44.813 ************************************ 00:08:44.813 00:08:44.813 real 1m42.554s 00:08:44.813 user 1m53.203s 00:08:44.813 sys 0m7.568s 00:08:44.813 04:48:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.813 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:44.813 ************************************ 00:08:44.813 END TEST accel 00:08:44.813 ************************************ 00:08:44.813 04:48:51 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:44.813 04:48:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:44.813 04:48:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.813 04:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:44.813 ************************************ 00:08:44.813 START TEST accel_rpc 00:08:44.813 ************************************ 00:08:44.813 04:48:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:45.079 * Looking for test storage... 00:08:45.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:45.079 04:48:52 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:45.079 04:48:52 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61002 00:08:45.079 04:48:52 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:45.079 04:48:52 -- accel/accel_rpc.sh@15 -- # waitforlisten 61002 00:08:45.079 04:48:52 -- common/autotest_common.sh@819 -- # '[' -z 61002 ']' 00:08:45.079 04:48:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.079 04:48:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.079 04:48:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.079 04:48:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.079 04:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:45.079 [2024-05-12 04:48:52.118272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
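The target is started with --wait-for-rpc on purpose: it holds the app in its pre-init state, the suite makes its opcode-to-module assignments while the framework is still down, and only then calls framework_start_init. Condensed to plain rpc.py calls, the flow traced below is roughly this (paths relative to the repo root; the harness waits for the RPC socket via waitforlisten):

  ./build/bin/spdk_tgt --wait-for-rpc &
  # ... wait for /var/tmp/spdk.sock to appear ...
  ./scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode
  ./scripts/rpc.py framework_start_init                     # finish subsystem init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # -> software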
00:08:45.079 [2024-05-12 04:48:52.118449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61002 ] 00:08:45.338 [2024-05-12 04:48:52.287363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.595 [2024-05-12 04:48:52.471449] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.595 [2024-05-12 04:48:52.471708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.160 04:48:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.160 04:48:52 -- common/autotest_common.sh@852 -- # return 0 00:08:46.160 04:48:52 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:46.160 04:48:52 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:46.160 04:48:52 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:46.160 04:48:52 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:46.160 04:48:52 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:46.160 04:48:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:46.160 04:48:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.160 04:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.160 ************************************ 00:08:46.160 START TEST accel_assign_opcode 00:08:46.160 ************************************ 00:08:46.160 04:48:52 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:08:46.160 04:48:53 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:46.160 04:48:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.160 04:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:46.160 [2024-05-12 04:48:53.004606] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:46.160 04:48:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.160 04:48:53 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:46.160 04:48:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.160 04:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:46.160 [2024-05-12 04:48:53.012529] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:46.160 04:48:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.160 04:48:53 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:46.160 04:48:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.160 04:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:46.728 04:48:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.728 04:48:53 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:46.728 04:48:53 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:46.728 04:48:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.728 04:48:53 -- accel/accel_rpc.sh@42 -- # grep software 00:08:46.728 04:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:46.728 04:48:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.728 software 00:08:46.728 ************************************ 00:08:46.728 END TEST accel_assign_opcode 00:08:46.728 ************************************ 00:08:46.728 00:08:46.728 real 0m0.644s 00:08:46.728 user 0m0.043s 00:08:46.728 sys 0m0.012s 00:08:46.728 04:48:53 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.728 04:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:46.728 04:48:53 -- accel/accel_rpc.sh@55 -- # killprocess 61002 00:08:46.728 04:48:53 -- common/autotest_common.sh@926 -- # '[' -z 61002 ']' 00:08:46.728 04:48:53 -- common/autotest_common.sh@930 -- # kill -0 61002 00:08:46.728 04:48:53 -- common/autotest_common.sh@931 -- # uname 00:08:46.728 04:48:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:46.728 04:48:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61002 00:08:46.728 04:48:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:46.728 killing process with pid 61002 00:08:46.728 04:48:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:46.728 04:48:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61002' 00:08:46.728 04:48:53 -- common/autotest_common.sh@945 -- # kill 61002 00:08:46.728 04:48:53 -- common/autotest_common.sh@950 -- # wait 61002 00:08:48.632 ************************************ 00:08:48.632 END TEST accel_rpc 00:08:48.632 ************************************ 00:08:48.632 00:08:48.632 real 0m3.573s 00:08:48.632 user 0m3.608s 00:08:48.632 sys 0m0.439s 00:08:48.632 04:48:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.632 04:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 04:48:55 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:48.632 04:48:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.632 04:48:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.632 04:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 ************************************ 00:08:48.632 START TEST app_cmdline 00:08:48.632 ************************************ 00:08:48.632 04:48:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:48.632 * Looking for test storage... 00:08:48.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:48.632 04:48:55 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:48.632 04:48:55 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61112 00:08:48.632 04:48:55 -- app/cmdline.sh@18 -- # waitforlisten 61112 00:08:48.632 04:48:55 -- common/autotest_common.sh@819 -- # '[' -z 61112 ']' 00:08:48.632 04:48:55 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:48.632 04:48:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.632 04:48:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:48.632 04:48:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.632 04:48:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:48.632 04:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 [2024-05-12 04:48:55.748491] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
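Here the target is deliberately restricted with --rpcs-allowed spdk_get_version,rpc_get_methods, so the suite can check both that the two whitelisted methods answer and that everything else is rejected at the RPC layer. In plain rpc.py terms, the trace below amounts to:

  ./scripts/rpc.py spdk_get_version          # allowed: returns the version object
  ./scripts/rpc.py rpc_get_methods           # allowed: lists exactly these two methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # anything else -> error -32601, Method not found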
00:08:48.632 [2024-05-12 04:48:55.748677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61112 ] 00:08:48.891 [2024-05-12 04:48:55.920522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.151 [2024-05-12 04:48:56.078933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.151 [2024-05-12 04:48:56.079151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.529 04:48:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.529 04:48:57 -- common/autotest_common.sh@852 -- # return 0 00:08:50.529 04:48:57 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:50.529 { 00:08:50.529 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:08:50.529 "fields": { 00:08:50.529 "major": 24, 00:08:50.529 "minor": 1, 00:08:50.529 "patch": 1, 00:08:50.529 "suffix": "-pre", 00:08:50.529 "commit": "36faa8c31" 00:08:50.529 } 00:08:50.529 } 00:08:50.529 04:48:57 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:50.529 04:48:57 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:50.529 04:48:57 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:50.529 04:48:57 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:50.529 04:48:57 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:50.529 04:48:57 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:50.529 04:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.529 04:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:50.529 04:48:57 -- app/cmdline.sh@26 -- # sort 00:08:50.529 04:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.529 04:48:57 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:50.529 04:48:57 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:50.529 04:48:57 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:50.529 04:48:57 -- common/autotest_common.sh@640 -- # local es=0 00:08:50.529 04:48:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:50.529 04:48:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.529 04:48:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:50.529 04:48:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.529 04:48:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:50.529 04:48:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.529 04:48:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:50.529 04:48:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.529 04:48:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:50.529 04:48:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:50.787 request: 00:08:50.787 { 00:08:50.787 "method": "env_dpdk_get_mem_stats", 00:08:50.787 "req_id": 1 00:08:50.787 } 00:08:50.787 Got 
JSON-RPC error response 00:08:50.787 response: 00:08:50.787 { 00:08:50.787 "code": -32601, 00:08:50.787 "message": "Method not found" 00:08:50.787 } 00:08:50.787 04:48:57 -- common/autotest_common.sh@643 -- # es=1 00:08:50.787 04:48:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:50.787 04:48:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:50.787 04:48:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:50.787 04:48:57 -- app/cmdline.sh@1 -- # killprocess 61112 00:08:50.787 04:48:57 -- common/autotest_common.sh@926 -- # '[' -z 61112 ']' 00:08:50.787 04:48:57 -- common/autotest_common.sh@930 -- # kill -0 61112 00:08:50.787 04:48:57 -- common/autotest_common.sh@931 -- # uname 00:08:50.787 04:48:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:50.787 04:48:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61112 00:08:50.787 killing process with pid 61112 00:08:50.787 04:48:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:50.787 04:48:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:50.787 04:48:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61112' 00:08:50.787 04:48:57 -- common/autotest_common.sh@945 -- # kill 61112 00:08:50.787 04:48:57 -- common/autotest_common.sh@950 -- # wait 61112 00:08:52.692 ************************************ 00:08:52.692 END TEST app_cmdline 00:08:52.692 ************************************ 00:08:52.692 00:08:52.692 real 0m4.091s 00:08:52.692 user 0m4.712s 00:08:52.692 sys 0m0.482s 00:08:52.692 04:48:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.692 04:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:52.692 04:48:59 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:52.692 04:48:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:52.692 04:48:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.692 04:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:52.692 ************************************ 00:08:52.692 START TEST version 00:08:52.692 ************************************ 00:08:52.692 04:48:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:52.692 * Looking for test storage... 
00:08:52.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:52.692 04:48:59 -- app/version.sh@17 -- # get_header_version major 00:08:52.692 04:48:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:52.692 04:48:59 -- app/version.sh@14 -- # cut -f2 00:08:52.692 04:48:59 -- app/version.sh@14 -- # tr -d '"' 00:08:52.692 04:48:59 -- app/version.sh@17 -- # major=24 00:08:52.692 04:48:59 -- app/version.sh@18 -- # get_header_version minor 00:08:52.692 04:48:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:52.692 04:48:59 -- app/version.sh@14 -- # cut -f2 00:08:52.692 04:48:59 -- app/version.sh@14 -- # tr -d '"' 00:08:52.692 04:48:59 -- app/version.sh@18 -- # minor=1 00:08:52.692 04:48:59 -- app/version.sh@19 -- # get_header_version patch 00:08:52.692 04:48:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:52.692 04:48:59 -- app/version.sh@14 -- # cut -f2 00:08:52.692 04:48:59 -- app/version.sh@14 -- # tr -d '"' 00:08:52.692 04:48:59 -- app/version.sh@19 -- # patch=1 00:08:52.692 04:48:59 -- app/version.sh@20 -- # get_header_version suffix 00:08:52.692 04:48:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:52.692 04:48:59 -- app/version.sh@14 -- # cut -f2 00:08:52.692 04:48:59 -- app/version.sh@14 -- # tr -d '"' 00:08:52.692 04:48:59 -- app/version.sh@20 -- # suffix=-pre 00:08:52.692 04:48:59 -- app/version.sh@22 -- # version=24.1 00:08:52.692 04:48:59 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:52.692 04:48:59 -- app/version.sh@25 -- # version=24.1.1 00:08:52.692 04:48:59 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:52.692 04:48:59 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:52.692 04:48:59 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:52.951 04:48:59 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:52.951 04:48:59 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:52.951 00:08:52.951 real 0m0.140s 00:08:52.951 user 0m0.073s 00:08:52.951 sys 0m0.100s 00:08:52.951 ************************************ 00:08:52.951 END TEST version 00:08:52.951 ************************************ 00:08:52.951 04:48:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.951 04:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:52.951 04:48:59 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:52.951 04:48:59 -- spdk/autotest.sh@204 -- # uname -s 00:08:52.951 04:48:59 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:52.951 04:48:59 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:52.951 04:48:59 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:52.951 04:48:59 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:08:52.951 04:48:59 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:52.951 04:48:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:52.951 04:48:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.951 04:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:52.951 
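For reference, the version check that just ran is pure text processing: each component is grepped out of include/spdk/version.h, assembled into 24.1.1rc0 (the trace maps the -pre suffix to Python's rc0 convention), and compared against what the bundled Python package reports:

  hdr=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 24
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 1
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 1
  python3 -c 'import spdk; print(spdk.__version__)'   # 24.1.1rc0, with PYTHONPATH set as in the trace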
************************************ 00:08:52.951 START TEST blockdev_nvme 00:08:52.951 ************************************ 00:08:52.951 04:48:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:52.951 * Looking for test storage... 00:08:52.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:52.951 04:48:59 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:52.951 04:48:59 -- bdev/nbd_common.sh@6 -- # set -e 00:08:52.951 04:48:59 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:52.951 04:48:59 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:52.951 04:48:59 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:52.951 04:48:59 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:52.951 04:48:59 -- bdev/blockdev.sh@18 -- # : 00:08:52.951 04:48:59 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:52.951 04:48:59 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:52.951 04:48:59 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:52.951 04:48:59 -- bdev/blockdev.sh@672 -- # uname -s 00:08:52.951 04:48:59 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:52.951 04:48:59 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:52.951 04:48:59 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:08:52.951 04:48:59 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:52.951 04:48:59 -- bdev/blockdev.sh@682 -- # dek= 00:08:52.951 04:48:59 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:52.951 04:48:59 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:52.951 04:48:59 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:52.951 04:48:59 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:08:52.951 04:48:59 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:08:52.951 04:48:59 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:52.951 04:48:59 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61279 00:08:52.951 04:48:59 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:52.951 04:48:59 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:52.951 04:48:59 -- bdev/blockdev.sh@47 -- # waitforlisten 61279 00:08:52.951 04:48:59 -- common/autotest_common.sh@819 -- # '[' -z 61279 ']' 00:08:52.951 04:48:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.951 04:48:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.951 04:48:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.951 04:48:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.951 04:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:53.210 [2024-05-12 04:49:00.103679] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
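setup_nvme_conf, which runs next, builds the bdev layer entirely over RPC: gen_nvme.sh emits one bdev_nvme_attach_controller entry per NVMe controller it finds (four here, at PCI addresses 0000:00:06.0 through 0000:00:09.0), and the resulting JSON is handed to the running target. A stripped-down sketch of that flow, assuming gen_nvme.sh prints the whole blob to stdout as the trace suggests:

  json=$(./scripts/gen_nvme.sh)   # {"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",...},...]}
  ./scripts/rpc.py load_subsystem_config -j "$json"
  ./scripts/rpc.py bdev_wait_for_examine    # block until every bdev has been registered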
00:08:53.210 [2024-05-12 04:49:00.103828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61279 ] 00:08:53.210 [2024-05-12 04:49:00.270147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.469 [2024-05-12 04:49:00.432181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.469 [2024-05-12 04:49:00.432415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.847 04:49:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:54.847 04:49:01 -- common/autotest_common.sh@852 -- # return 0 00:08:54.847 04:49:01 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:54.847 04:49:01 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:08:54.847 04:49:01 -- bdev/blockdev.sh@79 -- # local json 00:08:54.847 04:49:01 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:54.847 04:49:01 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:54.847 04:49:01 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:54.847 04:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.847 04:49:01 -- common/autotest_common.sh@10 -- # set +x 00:08:55.106 04:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.106 04:49:02 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:55.106 04:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.106 04:49:02 -- common/autotest_common.sh@10 -- # set +x 00:08:55.106 04:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.106 04:49:02 -- bdev/blockdev.sh@738 -- # cat 00:08:55.106 04:49:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:55.106 04:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.106 04:49:02 -- common/autotest_common.sh@10 -- # set +x 00:08:55.106 04:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.106 04:49:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:55.106 04:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.106 04:49:02 -- common/autotest_common.sh@10 -- # set +x 00:08:55.106 04:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.106 04:49:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:55.106 04:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.106 04:49:02 -- common/autotest_common.sh@10 -- # set +x 00:08:55.106 04:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.106 04:49:02 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:55.106 04:49:02 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:55.106 04:49:02 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:55.106 04:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.106 04:49:02 -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.367 04:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.367 04:49:02 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:55.367 04:49:02 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:55.368 04:49:02 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "13ba8877-9530-433e-be51-5d6dda7deb97"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "13ba8877-9530-433e-be51-5d6dda7deb97",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d86c0128-8dcd-4a62-b827-d30db96bf6f9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d86c0128-8dcd-4a62-b827-d30db96bf6f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3e937e3d-d0b2-48dc-b68f-7643c4750416"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e937e3d-d0b2-48dc-b68f-7643c4750416",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bc6927fb-9353-476a-a2e6-4f6909f62770"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc6927fb-9353-476a-a2e6-4f6909f62770",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4f0ba79d-9381-4d47-a70b-ee05e67a4cc3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4f0ba79d-9381-4d47-a70b-ee05e67a4cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "98a281fb-d7fc-4a3b-9f6a-d4daf73b9222"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"98a281fb-d7fc-4a3b-9f6a-d4daf73b9222",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:55.368 04:49:02 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:55.368 04:49:02 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:08:55.368 04:49:02 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:55.368 04:49:02 -- bdev/blockdev.sh@752 -- # killprocess 61279 00:08:55.368 04:49:02 -- common/autotest_common.sh@926 -- # '[' -z 61279 ']' 00:08:55.368 04:49:02 -- common/autotest_common.sh@930 -- # kill -0 61279 00:08:55.368 04:49:02 -- common/autotest_common.sh@931 -- # uname 00:08:55.368 04:49:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:55.368 04:49:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61279 00:08:55.368 04:49:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:55.368 04:49:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:55.368 04:49:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61279' 00:08:55.368 killing process with pid 61279 00:08:55.368 04:49:02 -- common/autotest_common.sh@945 -- # kill 61279 00:08:55.368 04:49:02 -- common/autotest_common.sh@950 -- # wait 61279 00:08:57.275 04:49:04 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:57.275 04:49:04 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:57.275 04:49:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:57.275 04:49:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.275 04:49:04 -- common/autotest_common.sh@10 -- # set +x 00:08:57.275 ************************************ 00:08:57.275 START TEST bdev_hello_world 00:08:57.275 ************************************ 00:08:57.275 04:49:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:57.275 [2024-05-12 04:49:04.173382] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:57.275 [2024-05-12 04:49:04.173590] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61376 ] 00:08:57.275 [2024-05-12 04:49:04.343027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.535 [2024-05-12 04:49:04.496450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.103 [2024-05-12 04:49:05.049438] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:58.103 [2024-05-12 04:49:05.049507] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:58.103 [2024-05-12 04:49:05.049551] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:58.103 [2024-05-12 04:49:05.052235] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:58.103 [2024-05-12 04:49:05.052796] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:58.103 [2024-05-12 04:49:05.052837] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:58.103 [2024-05-12 04:49:05.053157] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:58.103 00:08:58.103 [2024-05-12 04:49:05.053203] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:59.040 00:08:59.040 real 0m1.834s 00:08:59.040 user 0m1.526s 00:08:59.040 sys 0m0.206s 00:08:59.040 04:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.040 04:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.040 ************************************ 00:08:59.040 END TEST bdev_hello_world 00:08:59.040 ************************************ 00:08:59.040 04:49:05 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:59.040 04:49:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:59.040 04:49:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.040 04:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.041 ************************************ 00:08:59.041 START TEST bdev_bounds 00:08:59.041 ************************************ 00:08:59.041 04:49:05 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:08:59.041 04:49:05 -- bdev/blockdev.sh@288 -- # bdevio_pid=61418 00:08:59.041 04:49:05 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:59.041 Process bdevio pid: 61418 00:08:59.041 04:49:05 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61418' 00:08:59.041 04:49:05 -- bdev/blockdev.sh@291 -- # waitforlisten 61418 00:08:59.041 04:49:05 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:59.041 04:49:05 -- common/autotest_common.sh@819 -- # '[' -z 61418 ']' 00:08:59.041 04:49:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.041 04:49:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.041 04:49:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
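The single-line JSON wall above is one bdev_get_bdevs response covering six namespaces: Nvme0n1, Nvme1n1, Nvme2n1 through Nvme2n3 (three namespaces on the serial-12342 controller) and Nvme3n1. blockdev.sh reduces it with the jq filters shown in the trace, and the first name becomes the hello-world target (the -b Nvme0n1 argument above). Combined into one pipeline:

  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
  # -> Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1, one per line

Next, bdev_bounds drives bdevio with boundary-condition I/O against each of those bdevs (the I/O targets list below). As I read the trace, the -w flag makes the binary wait after setup so the actual pass can be kicked off over RPC, which is what tests.py does:

  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # ... wait for the RPC socket, then trigger the suite:
  ./test/bdev/bdevio/tests.py perform_tests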
00:08:59.041 04:49:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.041 04:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.041 [2024-05-12 04:49:06.070413] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:59.041 [2024-05-12 04:49:06.070578] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61418 ] 00:08:59.299 [2024-05-12 04:49:06.241991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.558 [2024-05-12 04:49:06.428284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.558 [2024-05-12 04:49:06.428358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.559 [2024-05-12 04:49:06.428368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.967 04:49:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:00.967 04:49:07 -- common/autotest_common.sh@852 -- # return 0 00:09:00.967 04:49:07 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:00.967 I/O targets: 00:09:00.967 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:00.967 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:00.967 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.967 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.967 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.967 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:00.967 00:09:00.967 00:09:00.967 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.967 http://cunit.sourceforge.net/ 00:09:00.967 00:09:00.967 00:09:00.967 Suite: bdevio tests on: Nvme3n1 00:09:00.967 Test: blockdev write read block ...passed 00:09:00.967 Test: blockdev write zeroes read block ...passed 00:09:00.967 Test: blockdev write zeroes read no split ...passed 00:09:00.967 Test: blockdev write zeroes read split ...passed 00:09:00.967 Test: blockdev write zeroes read split partial ...passed 00:09:00.967 Test: blockdev reset ...[2024-05-12 04:49:07.878060] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:00.967 [2024-05-12 04:49:07.881663] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
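bdev_bounds drives the bdevio app over all six namespaces listed in the I/O targets above; a minimal sketch of the same two-step flow, assuming the paths this job uses:

    # start bdevio in wait mode (-w) on shared-memory id 0, then trigger
    # the CUnit suites over its RPC socket once it is listening
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests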
00:09:00.967 passed 00:09:00.967 Test: blockdev write read 8 blocks ...passed 00:09:00.967 Test: blockdev write read size > 128k ...passed 00:09:00.967 Test: blockdev write read invalid size ...passed 00:09:00.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.967 Test: blockdev write read max offset ...passed 00:09:00.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.967 Test: blockdev writev readv 8 blocks ...passed 00:09:00.967 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.967 Test: blockdev writev readv block ...passed 00:09:00.967 Test: blockdev writev readv size > 128k ...passed 00:09:00.967 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.967 Test: blockdev comparev and writev ...[2024-05-12 04:49:07.890161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29180e000 len:0x1000 00:09:00.967 [2024-05-12 04:49:07.890408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.967 passed 00:09:00.967 Test: blockdev nvme passthru rw ...passed 00:09:00.967 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:49:07.891624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.967 passed 00:09:00.967 Test: blockdev nvme admin passthru ...[2024-05-12 04:49:07.891845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.967 passed 00:09:00.967 Test: blockdev copy ...passed 00:09:00.967 Suite: bdevio tests on: Nvme2n3 00:09:00.967 Test: blockdev write read block ...passed 00:09:00.967 Test: blockdev write zeroes read block ...passed 00:09:00.967 Test: blockdev write zeroes read no split ...passed 00:09:00.967 Test: blockdev write zeroes read split ...passed 00:09:00.967 Test: blockdev write zeroes read split partial ...passed 00:09:00.967 Test: blockdev reset ...[2024-05-12 04:49:07.955214] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:00.967 [2024-05-12 04:49:07.958923] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
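Two details of the suite above repeat for every namespace below. The 'blockdev reset' test disconnects and reconnects the controller before I/O resumes, and the COMPARE FAILURE (02/85) completions logged by the comparev-and-writev test appear to be the expected mismatch path, since the test itself passes. A standalone reset of the same kind can be issued over RPC; a sketch, with the controller name (Nvme3) assumed from the bdev naming above:

    # hypothetical manual reset of the controller backing Nvme3n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3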
00:09:00.967 passed 00:09:00.967 Test: blockdev write read 8 blocks ...passed 00:09:00.967 Test: blockdev write read size > 128k ...passed 00:09:00.967 Test: blockdev write read invalid size ...passed 00:09:00.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.967 Test: blockdev write read max offset ...passed 00:09:00.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.967 Test: blockdev writev readv 8 blocks ...passed 00:09:00.967 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.967 Test: blockdev writev readv block ...passed 00:09:00.967 Test: blockdev writev readv size > 128k ...passed 00:09:00.967 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.967 Test: blockdev comparev and writev ...[2024-05-12 04:49:07.966497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29180a000 len:0x1000 00:09:00.967 [2024-05-12 04:49:07.966557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.967 passed 00:09:00.967 Test: blockdev nvme passthru rw ...passed 00:09:00.967 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:49:07.967581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.967 passed 00:09:00.967 Test: blockdev nvme admin passthru ...[2024-05-12 04:49:07.967651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.967 passed 00:09:00.967 Test: blockdev copy ...passed 00:09:00.967 Suite: bdevio tests on: Nvme2n2 00:09:00.967 Test: blockdev write read block ...passed 00:09:00.967 Test: blockdev write zeroes read block ...passed 00:09:00.967 Test: blockdev write zeroes read no split ...passed 00:09:00.967 Test: blockdev write zeroes read split ...passed 00:09:00.967 Test: blockdev write zeroes read split partial ...passed 00:09:00.967 Test: blockdev reset ...[2024-05-12 04:49:08.031842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:00.967 [2024-05-12 04:49:08.035478] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.967 passed 00:09:00.967 Test: blockdev write read 8 blocks ...passed 00:09:00.967 Test: blockdev write read size > 128k ...passed 00:09:00.967 Test: blockdev write read invalid size ...passed 00:09:00.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.968 Test: blockdev write read max offset ...passed 00:09:00.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.968 Test: blockdev writev readv 8 blocks ...passed 00:09:00.968 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.968 Test: blockdev writev readv block ...passed 00:09:00.968 Test: blockdev writev readv size > 128k ...passed 00:09:00.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.968 Test: blockdev comparev and writev ...[2024-05-12 04:49:08.043031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c206000 len:0x1000 00:09:00.968 [2024-05-12 04:49:08.043088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.968 passed 00:09:00.968 Test: blockdev nvme passthru rw ...passed 00:09:00.968 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:49:08.043965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.968 [2024-05-12 04:49:08.044014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.968 passed 00:09:00.968 Test: blockdev nvme admin passthru ...passed 00:09:00.968 Test: blockdev copy ...passed 00:09:00.968 Suite: bdevio tests on: Nvme2n1 00:09:00.968 Test: blockdev write read block ...passed 00:09:00.968 Test: blockdev write zeroes read block ...passed 00:09:00.968 Test: blockdev write zeroes read no split ...passed 00:09:00.968 Test: blockdev write zeroes read split ...passed 00:09:01.226 Test: blockdev write zeroes read split partial ...passed 00:09:01.226 Test: blockdev reset ...[2024-05-12 04:49:08.108272] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:01.226 [2024-05-12 04:49:08.112076] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:01.226 passed 00:09:01.226 Test: blockdev write read 8 blocks ...passed 00:09:01.226 Test: blockdev write read size > 128k ...passed 00:09:01.226 Test: blockdev write read invalid size ...passed 00:09:01.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:01.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:01.226 Test: blockdev write read max offset ...passed 00:09:01.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:01.226 Test: blockdev writev readv 8 blocks ...passed 00:09:01.226 Test: blockdev writev readv 30 x 1block ...passed 00:09:01.226 Test: blockdev writev readv block ...passed 00:09:01.226 Test: blockdev writev readv size > 128k ...passed 00:09:01.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:01.226 Test: blockdev comparev and writev ...[2024-05-12 04:49:08.119431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c201000 len:0x1000 00:09:01.226 [2024-05-12 04:49:08.119490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:01.226 passed 00:09:01.226 Test: blockdev nvme passthru rw ...passed 00:09:01.226 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:49:08.120476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:01.226 passed 00:09:01.226 Test: blockdev nvme admin passthru ...[2024-05-12 04:49:08.120533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:01.226 passed 00:09:01.226 Test: blockdev copy ...passed 00:09:01.226 Suite: bdevio tests on: Nvme1n1 00:09:01.226 Test: blockdev write read block ...passed 00:09:01.226 Test: blockdev write zeroes read block ...passed 00:09:01.226 Test: blockdev write zeroes read no split ...passed 00:09:01.226 Test: blockdev write zeroes read split ...passed 00:09:01.226 Test: blockdev write zeroes read split partial ...passed 00:09:01.226 Test: blockdev reset ...[2024-05-12 04:49:08.186194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:01.226 [2024-05-12 04:49:08.189707] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:01.226 passed 00:09:01.226 Test: blockdev write read 8 blocks ...passed 00:09:01.226 Test: blockdev write read size > 128k ...passed 00:09:01.226 Test: blockdev write read invalid size ...passed 00:09:01.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:01.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:01.226 Test: blockdev write read max offset ...passed 00:09:01.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:01.226 Test: blockdev writev readv 8 blocks ...passed 00:09:01.226 Test: blockdev writev readv 30 x 1block ...passed 00:09:01.226 Test: blockdev writev readv block ...passed 00:09:01.226 Test: blockdev writev readv size > 128k ...passed 00:09:01.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:01.226 Test: blockdev comparev and writev ...[2024-05-12 04:49:08.197558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295a06000 len:0x1000 00:09:01.226 [2024-05-12 04:49:08.197618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:01.226 passed 00:09:01.226 Test: blockdev nvme passthru rw ...passed 00:09:01.226 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:49:08.198525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:01.226 [2024-05-12 04:49:08.198582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:01.226 passed 00:09:01.226 Test: blockdev nvme admin passthru ...passed 00:09:01.226 Test: blockdev copy ...passed 00:09:01.226 Suite: bdevio tests on: Nvme0n1 00:09:01.226 Test: blockdev write read block ...passed 00:09:01.226 Test: blockdev write zeroes read block ...passed 00:09:01.226 Test: blockdev write zeroes read no split ...passed 00:09:01.226 Test: blockdev write zeroes read split ...passed 00:09:01.226 Test: blockdev write zeroes read split partial ...passed 00:09:01.226 Test: blockdev reset ...[2024-05-12 04:49:08.263396] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:01.226 [2024-05-12 04:49:08.266733] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:01.226 passed 00:09:01.226 Test: blockdev write read 8 blocks ...passed 00:09:01.226 Test: blockdev write read size > 128k ...passed 00:09:01.226 Test: blockdev write read invalid size ...passed 00:09:01.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:01.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:01.226 Test: blockdev write read max offset ...passed 00:09:01.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:01.226 Test: blockdev writev readv 8 blocks ...passed 00:09:01.226 Test: blockdev writev readv 30 x 1block ...passed 00:09:01.226 Test: blockdev writev readv block ...passed 00:09:01.226 Test: blockdev writev readv size > 128k ...passed 00:09:01.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:01.227 Test: blockdev comparev and writev ...[2024-05-12 04:49:08.273510] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:01.227 separate metadata which is not supported yet. 
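Unlike the other namespaces, Nvme0n1 is skipped by comparev_and_writev because that bdev carries separate (non-interleaved) metadata. One way to inspect a bdev's metadata layout, assuming the md_size/md_interleave fields are present in the bdev_get_bdevs dump for this configuration:

    # hypothetical check of the metadata settings on Nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {md_size, md_interleave}'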
00:09:01.227 passed 00:09:01.227 Test: blockdev nvme passthru rw ...passed 00:09:01.227 Test: blockdev nvme passthru vendor specific ...passed 00:09:01.227 Test: blockdev nvme admin passthru ...[2024-05-12 04:49:08.274103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:01.227 [2024-05-12 04:49:08.274168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:01.227 passed 00:09:01.227 Test: blockdev copy ...passed 00:09:01.227 00:09:01.227 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.227 suites 6 6 n/a 0 0 00:09:01.227 tests 138 138 138 0 0 00:09:01.227 asserts 893 893 893 0 n/a 00:09:01.227 00:09:01.227 Elapsed time = 1.244 seconds 00:09:01.227 0 00:09:01.227 04:49:08 -- bdev/blockdev.sh@293 -- # killprocess 61418 00:09:01.227 04:49:08 -- common/autotest_common.sh@926 -- # '[' -z 61418 ']' 00:09:01.227 04:49:08 -- common/autotest_common.sh@930 -- # kill -0 61418 00:09:01.227 04:49:08 -- common/autotest_common.sh@931 -- # uname 00:09:01.227 04:49:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:01.227 04:49:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61418 00:09:01.227 04:49:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:01.227 04:49:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:01.227 killing process with pid 61418 00:09:01.227 04:49:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61418' 00:09:01.227 04:49:08 -- common/autotest_common.sh@945 -- # kill 61418 00:09:01.227 04:49:08 -- common/autotest_common.sh@950 -- # wait 61418 00:09:02.158 04:49:09 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:02.158 00:09:02.158 real 0m3.244s 00:09:02.158 user 0m8.558s 00:09:02.158 sys 0m0.361s 00:09:02.158 04:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.158 ************************************ 00:09:02.158 04:49:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.158 END TEST bdev_bounds 00:09:02.158 ************************************ 00:09:02.158 04:49:09 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:02.158 04:49:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:02.158 04:49:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.158 04:49:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.158 ************************************ 00:09:02.158 START TEST bdev_nbd 00:09:02.158 ************************************ 00:09:02.158 04:49:09 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:02.158 04:49:09 -- bdev/blockdev.sh@298 -- # uname -s 00:09:02.416 04:49:09 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:02.416 04:49:09 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.416 04:49:09 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:02.416 04:49:09 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.416 04:49:09 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:02.416 04:49:09 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:09:02.416 04:49:09 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:09:02.416 04:49:09 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:02.416 04:49:09 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:02.416 04:49:09 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:09:02.416 04:49:09 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:02.416 04:49:09 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:02.416 04:49:09 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.416 04:49:09 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:02.416 04:49:09 -- bdev/blockdev.sh@316 -- # nbd_pid=61485 00:09:02.416 04:49:09 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:02.416 04:49:09 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:02.416 04:49:09 -- bdev/blockdev.sh@318 -- # waitforlisten 61485 /var/tmp/spdk-nbd.sock 00:09:02.416 04:49:09 -- common/autotest_common.sh@819 -- # '[' -z 61485 ']' 00:09:02.416 04:49:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.416 04:49:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:02.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.416 04:49:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:02.416 04:49:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:02.416 04:49:09 -- common/autotest_common.sh@10 -- # set +x 00:09:02.416 [2024-05-12 04:49:09.367055] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:02.416 [2024-05-12 04:49:09.367212] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.416 [2024-05-12 04:49:09.525851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.674 [2024-05-12 04:49:09.689149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.048 04:49:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:04.048 04:49:10 -- common/autotest_common.sh@852 -- # return 0 00:09:04.048 04:49:10 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@24 -- # local i 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:04.048 04:49:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:04.049 04:49:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:04.306 04:49:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:04.306 04:49:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:04.306 04:49:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:04.306 04:49:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:04.307 04:49:11 -- common/autotest_common.sh@857 -- # local i 00:09:04.307 04:49:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:04.307 04:49:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:04.307 04:49:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:04.307 04:49:11 -- common/autotest_common.sh@861 -- # break 00:09:04.307 04:49:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:04.307 04:49:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:04.307 04:49:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.307 1+0 records in 00:09:04.307 1+0 records out 00:09:04.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000835524 s, 4.9 MB/s 00:09:04.307 04:49:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.307 04:49:11 -- common/autotest_common.sh@874 -- # size=4096 00:09:04.307 04:49:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.307 04:49:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:04.307 04:49:11 -- common/autotest_common.sh@877 -- # return 0 00:09:04.307 04:49:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:04.307 04:49:11 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:04.307 04:49:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:04.565 04:49:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:04.565 04:49:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:04.565 04:49:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:04.565 04:49:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:04.565 04:49:11 -- common/autotest_common.sh@857 -- # local i 00:09:04.565 04:49:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:04.565 04:49:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:04.565 04:49:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:04.565 04:49:11 -- common/autotest_common.sh@861 -- # break 00:09:04.565 04:49:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:04.565 04:49:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:04.565 04:49:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.565 1+0 records in 00:09:04.565 1+0 records out 00:09:04.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055682 s, 7.4 MB/s 00:09:04.565 04:49:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.565 04:49:11 -- common/autotest_common.sh@874 -- # size=4096 00:09:04.565 04:49:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.565 04:49:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:04.565 04:49:11 -- common/autotest_common.sh@877 -- # return 0 00:09:04.565 04:49:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:04.565 04:49:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:04.565 04:49:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:04.824 04:49:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:04.824 04:49:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:04.824 04:49:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:04.824 04:49:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:09:04.824 04:49:11 -- common/autotest_common.sh@857 -- # local i 00:09:04.824 04:49:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:04.824 04:49:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:04.824 04:49:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:09:04.824 04:49:11 -- common/autotest_common.sh@861 -- # break 00:09:04.824 04:49:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:04.824 04:49:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:04.824 04:49:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.824 1+0 records in 00:09:04.824 1+0 records out 00:09:04.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059963 s, 6.8 MB/s 00:09:04.824 04:49:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.824 04:49:11 -- common/autotest_common.sh@874 -- # size=4096 00:09:04.824 04:49:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.824 04:49:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:04.824 04:49:11 -- common/autotest_common.sh@877 -- # return 0 
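Every nbd_start_disk above is followed by the same readiness check; reconstructed here as a sketch from the xtrace (the retry delay is an assumption, the trace only shows the loop bounds and the direct-I/O probe):

    # wait until the kernel lists the device in /proc/partitions,
    # then probe it with a single 4 KiB direct-I/O read
    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; the xtrace does not show the delay
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
            && rm -f /tmp/nbdtest
    }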
00:09:04.824 04:49:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:04.824 04:49:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:04.824 04:49:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:05.082 04:49:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:05.082 04:49:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:05.082 04:49:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:05.082 04:49:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:09:05.082 04:49:12 -- common/autotest_common.sh@857 -- # local i 00:09:05.082 04:49:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:05.082 04:49:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:05.082 04:49:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:09:05.082 04:49:12 -- common/autotest_common.sh@861 -- # break 00:09:05.082 04:49:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:05.082 04:49:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:05.082 04:49:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.082 1+0 records in 00:09:05.082 1+0 records out 00:09:05.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588744 s, 7.0 MB/s 00:09:05.082 04:49:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.082 04:49:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:05.082 04:49:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.082 04:49:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:05.082 04:49:12 -- common/autotest_common.sh@877 -- # return 0 00:09:05.082 04:49:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:05.082 04:49:12 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:05.082 04:49:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:05.346 04:49:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:05.346 04:49:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:05.346 04:49:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:05.346 04:49:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:09:05.346 04:49:12 -- common/autotest_common.sh@857 -- # local i 00:09:05.346 04:49:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:05.346 04:49:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:05.346 04:49:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:09:05.346 04:49:12 -- common/autotest_common.sh@861 -- # break 00:09:05.346 04:49:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:05.346 04:49:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:05.346 04:49:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.346 1+0 records in 00:09:05.346 1+0 records out 00:09:05.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683489 s, 6.0 MB/s 00:09:05.346 04:49:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.346 04:49:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:05.346 04:49:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.346 04:49:12 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:09:05.346 04:49:12 -- common/autotest_common.sh@877 -- # return 0 00:09:05.346 04:49:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:05.346 04:49:12 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:05.346 04:49:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:05.604 04:49:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:05.604 04:49:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:05.604 04:49:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:05.604 04:49:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:09:05.604 04:49:12 -- common/autotest_common.sh@857 -- # local i 00:09:05.604 04:49:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:05.604 04:49:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:05.604 04:49:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:09:05.604 04:49:12 -- common/autotest_common.sh@861 -- # break 00:09:05.604 04:49:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:05.604 04:49:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:05.604 04:49:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.604 1+0 records in 00:09:05.604 1+0 records out 00:09:05.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833621 s, 4.9 MB/s 00:09:05.604 04:49:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.604 04:49:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:05.604 04:49:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.604 04:49:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:05.604 04:49:12 -- common/autotest_common.sh@877 -- # return 0 00:09:05.604 04:49:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:05.604 04:49:12 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:05.604 04:49:12 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd0", 00:09:05.862 "bdev_name": "Nvme0n1" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd1", 00:09:05.862 "bdev_name": "Nvme1n1" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd2", 00:09:05.862 "bdev_name": "Nvme2n1" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd3", 00:09:05.862 "bdev_name": "Nvme2n2" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd4", 00:09:05.862 "bdev_name": "Nvme2n3" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd5", 00:09:05.862 "bdev_name": "Nvme3n1" 00:09:05.862 } 00:09:05.862 ]' 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd0", 00:09:05.862 "bdev_name": "Nvme0n1" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd1", 00:09:05.862 "bdev_name": "Nvme1n1" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd2", 00:09:05.862 "bdev_name": "Nvme2n1" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd3", 00:09:05.862 
"bdev_name": "Nvme2n2" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd4", 00:09:05.862 "bdev_name": "Nvme2n3" 00:09:05.862 }, 00:09:05.862 { 00:09:05.862 "nbd_device": "/dev/nbd5", 00:09:05.862 "bdev_name": "Nvme3n1" 00:09:05.862 } 00:09:05.862 ]' 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@51 -- # local i 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.862 04:49:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@41 -- # break 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.119 04:49:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@41 -- # break 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.377 04:49:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@41 -- # break 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.635 04:49:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:06.893 
04:49:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@41 -- # break 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.893 04:49:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@41 -- # break 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.151 04:49:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@41 -- # break 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.409 04:49:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@65 -- # true 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@122 -- # count=0 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@127 -- # return 0 00:09:07.667 04:49:14 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:07.667 04:49:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@12 -- # local i 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:07.668 04:49:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:07.926 /dev/nbd0 00:09:07.926 04:49:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:07.926 04:49:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:07.926 04:49:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:07.926 04:49:15 -- common/autotest_common.sh@857 -- # local i 00:09:07.926 04:49:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:07.926 04:49:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:07.926 04:49:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:07.926 04:49:15 -- common/autotest_common.sh@861 -- # break 00:09:07.926 04:49:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:07.926 04:49:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:07.926 04:49:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.926 1+0 records in 00:09:07.926 1+0 records out 00:09:07.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444805 s, 9.2 MB/s 00:09:07.926 04:49:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.185 04:49:15 -- common/autotest_common.sh@874 -- # size=4096 00:09:08.185 04:49:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.185 04:49:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:08.185 04:49:15 -- common/autotest_common.sh@877 -- # return 0 00:09:08.185 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.185 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:08.185 04:49:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:08.185 /dev/nbd1 00:09:08.443 04:49:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:08.443 04:49:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:08.443 04:49:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:08.443 04:49:15 -- common/autotest_common.sh@857 -- # local i 00:09:08.443 04:49:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:08.443 04:49:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:08.443 04:49:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:08.443 04:49:15 -- common/autotest_common.sh@861 -- # break 
00:09:08.443 04:49:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:08.443 04:49:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:08.443 04:49:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.443 1+0 records in 00:09:08.443 1+0 records out 00:09:08.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506331 s, 8.1 MB/s 00:09:08.443 04:49:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.443 04:49:15 -- common/autotest_common.sh@874 -- # size=4096 00:09:08.443 04:49:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.443 04:49:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:08.443 04:49:15 -- common/autotest_common.sh@877 -- # return 0 00:09:08.443 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.443 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:08.443 04:49:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:08.702 /dev/nbd10 00:09:08.702 04:49:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:08.702 04:49:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:08.702 04:49:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:09:08.702 04:49:15 -- common/autotest_common.sh@857 -- # local i 00:09:08.702 04:49:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:08.702 04:49:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:08.702 04:49:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:09:08.702 04:49:15 -- common/autotest_common.sh@861 -- # break 00:09:08.702 04:49:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:08.702 04:49:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:08.702 04:49:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.702 1+0 records in 00:09:08.702 1+0 records out 00:09:08.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704693 s, 5.8 MB/s 00:09:08.702 04:49:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.702 04:49:15 -- common/autotest_common.sh@874 -- # size=4096 00:09:08.702 04:49:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.702 04:49:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:08.702 04:49:15 -- common/autotest_common.sh@877 -- # return 0 00:09:08.702 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.702 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:08.702 04:49:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:08.961 /dev/nbd11 00:09:08.961 04:49:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:08.961 04:49:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:08.961 04:49:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:09:08.961 04:49:15 -- common/autotest_common.sh@857 -- # local i 00:09:08.961 04:49:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:08.961 04:49:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:08.961 04:49:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:09:08.961 04:49:15 -- 
common/autotest_common.sh@861 -- # break 00:09:08.961 04:49:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:08.961 04:49:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:08.961 04:49:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.961 1+0 records in 00:09:08.961 1+0 records out 00:09:08.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872171 s, 4.7 MB/s 00:09:08.961 04:49:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.961 04:49:15 -- common/autotest_common.sh@874 -- # size=4096 00:09:08.961 04:49:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.961 04:49:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:08.961 04:49:15 -- common/autotest_common.sh@877 -- # return 0 00:09:08.961 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.961 04:49:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:08.961 04:49:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:09.220 /dev/nbd12 00:09:09.220 04:49:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:09.220 04:49:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:09.220 04:49:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:09:09.220 04:49:16 -- common/autotest_common.sh@857 -- # local i 00:09:09.220 04:49:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:09.220 04:49:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:09.220 04:49:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:09:09.220 04:49:16 -- common/autotest_common.sh@861 -- # break 00:09:09.220 04:49:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:09.220 04:49:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:09.220 04:49:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.220 1+0 records in 00:09:09.220 1+0 records out 00:09:09.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701505 s, 5.8 MB/s 00:09:09.220 04:49:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.220 04:49:16 -- common/autotest_common.sh@874 -- # size=4096 00:09:09.220 04:49:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.220 04:49:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:09.220 04:49:16 -- common/autotest_common.sh@877 -- # return 0 00:09:09.220 04:49:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.220 04:49:16 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:09.220 04:49:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:09.478 /dev/nbd13 00:09:09.478 04:49:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:09.478 04:49:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:09.478 04:49:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:09:09.478 04:49:16 -- common/autotest_common.sh@857 -- # local i 00:09:09.478 04:49:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:09.478 04:49:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:09.478 04:49:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
00:09:09.478 04:49:16 -- common/autotest_common.sh@861 -- # break 00:09:09.478 04:49:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:09.478 04:49:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:09.478 04:49:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.478 1+0 records in 00:09:09.478 1+0 records out 00:09:09.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108908 s, 3.8 MB/s 00:09:09.478 04:49:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.478 04:49:16 -- common/autotest_common.sh@874 -- # size=4096 00:09:09.478 04:49:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.478 04:49:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:09.478 04:49:16 -- common/autotest_common.sh@877 -- # return 0 00:09:09.478 04:49:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.478 04:49:16 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:09.478 04:49:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.478 04:49:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.478 04:49:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd0", 00:09:09.737 "bdev_name": "Nvme0n1" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd1", 00:09:09.737 "bdev_name": "Nvme1n1" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd10", 00:09:09.737 "bdev_name": "Nvme2n1" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd11", 00:09:09.737 "bdev_name": "Nvme2n2" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd12", 00:09:09.737 "bdev_name": "Nvme2n3" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd13", 00:09:09.737 "bdev_name": "Nvme3n1" 00:09:09.737 } 00:09:09.737 ]' 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd0", 00:09:09.737 "bdev_name": "Nvme0n1" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd1", 00:09:09.737 "bdev_name": "Nvme1n1" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd10", 00:09:09.737 "bdev_name": "Nvme2n1" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd11", 00:09:09.737 "bdev_name": "Nvme2n2" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd12", 00:09:09.737 "bdev_name": "Nvme2n3" 00:09:09.737 }, 00:09:09.737 { 00:09:09.737 "nbd_device": "/dev/nbd13", 00:09:09.737 "bdev_name": "Nvme3n1" 00:09:09.737 } 00:09:09.737 ]' 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:09.737 /dev/nbd1 00:09:09.737 /dev/nbd10 00:09:09.737 /dev/nbd11 00:09:09.737 /dev/nbd12 00:09:09.737 /dev/nbd13' 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:09.737 /dev/nbd1 00:09:09.737 /dev/nbd10 00:09:09.737 /dev/nbd11 00:09:09.737 /dev/nbd12 00:09:09.737 /dev/nbd13' 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@65 -- # count=6 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@66 -- # echo 6 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@95 -- # count=6 
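The export list and the count check above can be reproduced directly, using the same jq filter the test applies:

    # list the active nbd exports and count them (expects 6 here)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd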
00:09:09.737 04:49:16 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:09.737 04:49:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.738 04:49:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:09.738 04:49:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:09.738 04:49:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:09.738 04:49:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:09.738 256+0 records in 00:09:09.738 256+0 records out 00:09:09.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105172 s, 99.7 MB/s 00:09:09.738 04:49:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.738 04:49:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:09.996 256+0 records in 00:09:09.996 256+0 records out 00:09:09.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137176 s, 7.6 MB/s 00:09:09.996 04:49:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.996 04:49:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:09.996 256+0 records in 00:09:09.996 256+0 records out 00:09:09.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164986 s, 6.4 MB/s 00:09:09.996 04:49:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.996 04:49:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:10.255 256+0 records in 00:09:10.255 256+0 records out 00:09:10.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138946 s, 7.5 MB/s 00:09:10.255 04:49:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.255 04:49:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:10.255 256+0 records in 00:09:10.255 256+0 records out 00:09:10.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159529 s, 6.6 MB/s 00:09:10.255 04:49:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.255 04:49:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:10.563 256+0 records in 00:09:10.563 256+0 records out 00:09:10.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136318 s, 7.7 MB/s 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:10.563 256+0 records in 00:09:10.563 256+0 records out 00:09:10.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170889 s, 6.1 MB/s 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.563 04:49:17 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.563 04:49:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:10.564 04:49:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.564 04:49:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:10.564 04:49:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.564 04:49:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@51 -- # local i 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.826 04:49:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@41 -- # break 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.085 04:49:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.343 04:49:18 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@41 -- # break 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:11.343 04:49:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@41 -- # break 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@41 -- # break 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.601 04:49:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@41 -- # break 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.860 04:49:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:12.118 04:49:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:12.118 04:49:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:12.118 04:49:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:12.118 04:49:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.118 04:49:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.118 04:49:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@41 -- # break 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:12.376 04:49:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@65 -- # true 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@65 -- # count=0 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@104 -- # count=0 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@109 -- # return 0 00:09:12.634 04:49:19 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:12.634 04:49:19 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:12.892 malloc_lvol_verify 00:09:12.892 04:49:19 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:13.151 c667e599-0576-4f14-98cb-188120a2df23 00:09:13.151 04:49:20 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:13.409 791aad93-db9c-411b-b42d-441cb60f8686 00:09:13.409 04:49:20 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:13.666 /dev/nbd0 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:13.666 mke2fs 1.46.5 (30-Dec-2021) 00:09:13.666 Discarding device blocks: 0/4096 done 00:09:13.666 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:13.666 00:09:13.666 Allocating group tables: 0/1 done 00:09:13.666 Writing inode tables: 0/1 done 00:09:13.666 Creating journal (1024 blocks): done 00:09:13.666 Writing superblocks and filesystem accounting information: 0/1 done 00:09:13.666 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@51 -- # local i 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.666 04:49:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:13.923 04:49:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:13.923 04:49:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:13.923 04:49:20 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:09:13.924 04:49:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.924 04:49:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.924 04:49:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:13.924 04:49:20 -- bdev/nbd_common.sh@41 -- # break 00:09:13.924 04:49:20 -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.924 04:49:20 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:13.924 04:49:20 -- bdev/nbd_common.sh@147 -- # return 0 00:09:13.924 04:49:20 -- bdev/blockdev.sh@324 -- # killprocess 61485 00:09:13.924 04:49:20 -- common/autotest_common.sh@926 -- # '[' -z 61485 ']' 00:09:13.924 04:49:20 -- common/autotest_common.sh@930 -- # kill -0 61485 00:09:13.924 04:49:20 -- common/autotest_common.sh@931 -- # uname 00:09:13.924 04:49:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:13.924 04:49:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61485 00:09:13.924 killing process with pid 61485 00:09:13.924 04:49:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:13.924 04:49:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:13.924 04:49:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61485' 00:09:13.924 04:49:20 -- common/autotest_common.sh@945 -- # kill 61485 00:09:13.924 04:49:20 -- common/autotest_common.sh@950 -- # wait 61485 00:09:14.858 ************************************ 00:09:14.858 END TEST bdev_nbd 00:09:14.858 ************************************ 00:09:14.858 04:49:21 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:14.858 00:09:14.858 real 0m12.603s 00:09:14.858 user 0m17.809s 00:09:14.858 sys 0m3.790s 00:09:14.858 04:49:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.858 04:49:21 -- common/autotest_common.sh@10 -- # set +x 00:09:14.858 04:49:21 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:14.858 04:49:21 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:09:14.858 skipping fio tests on NVMe due to multi-ns failures. 00:09:14.858 04:49:21 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:14.858 04:49:21 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:14.858 04:49:21 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:14.858 04:49:21 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:14.858 04:49:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.858 04:49:21 -- common/autotest_common.sh@10 -- # set +x 00:09:14.858 ************************************ 00:09:14.858 START TEST bdev_verify 00:09:14.858 ************************************ 00:09:14.858 04:49:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:15.116 [2024-05-12 04:49:22.026980] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:15.116 [2024-05-12 04:49:22.027153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61886 ]
00:09:15.116 [2024-05-12 04:49:22.196543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:15.373 [2024-05-12 04:49:22.373232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.373 [2024-05-12 04:49:22.373265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:15.938 Running I/O for 5 seconds...
00:09:21.204
00:09:21.204 Latency(us)
00:09:21.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:21.204 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x0 length 0xbd0bd
00:09:21.204 Nvme0n1 : 5.04 2928.17 11.44 0.00 0.00 43600.79 6196.13 45756.04
00:09:21.204 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:21.204 Nvme0n1 : 5.04 2901.68 11.33 0.00 0.00 43978.78 8221.79 54335.30
00:09:21.204 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x0 length 0xa0000
00:09:21.204 Nvme1n1 : 5.04 2926.47 11.43 0.00 0.00 43583.13 7923.90 43134.60
00:09:21.204 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0xa0000 length 0xa0000
00:09:21.204 Nvme1n1 : 5.04 2900.05 11.33 0.00 0.00 43955.42 10068.71 51475.55
00:09:21.204 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x0 length 0x80000
00:09:21.204 Nvme2n1 : 5.05 2924.76 11.42 0.00 0.00 43554.72 9711.24 39798.23
00:09:21.204 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x80000 length 0x80000
00:09:21.204 Nvme2n1 : 5.05 2904.47 11.35 0.00 0.00 43821.99 2785.28 42419.67
00:09:21.204 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x0 length 0x80000
00:09:21.204 Nvme2n2 : 5.05 2923.87 11.42 0.00 0.00 43508.18 10068.71 36461.85
00:09:21.204 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x80000 length 0x80000
00:09:21.204 Nvme2n2 : 5.05 2903.63 11.34 0.00 0.00 43774.23 3455.53 42181.35
00:09:21.204 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x0 length 0x80000
00:09:21.204 Nvme2n3 : 5.05 2929.94 11.45 0.00 0.00 43429.08 1422.43 35508.60
00:09:21.204 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x80000 length 0x80000
00:09:21.204 Nvme2n3 : 5.06 2909.56 11.37 0.00 0.00 43665.45 2323.55 39321.60
00:09:21.204 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x0 length 0x20000
00:09:21.204 Nvme3n1 : 5.05 2928.46 11.44 0.00 0.00 43405.83 2964.01 35508.60
00:09:21.204 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:21.204 Verification LBA range: start 0x20000 length 0x20000
00:09:21.204 Nvme3n1 : 5.06 2908.88 11.36 0.00 0.00 43633.27 2621.44 38844.97
===================================================================================================================
00:09:21.204 Total : 34989.95 136.68 0.00 0.00 43658.52 1422.43 54335.30
00:09:31.204
00:09:31.204 real 0m15.193s
00:09:31.204 user 0m29.036s
00:09:31.204 sys 0m0.323s
00:09:31.204 04:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:31.204 04:49:37 -- common/autotest_common.sh@10 -- # set +x
00:09:31.204 ************************************
00:09:31.204 END TEST bdev_verify
00:09:31.204 ************************************
00:09:31.204 04:49:37 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:31.204 04:49:37 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:09:31.204 04:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:31.204 04:49:37 -- common/autotest_common.sh@10 -- # set +x
00:09:31.204 ************************************
00:09:31.204 START TEST bdev_verify_big_io
00:09:31.204 ************************************
00:09:31.204 04:49:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:31.204 [2024-05-12 04:49:37.277462] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:09:31.204 [2024-05-12 04:49:37.277613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ]
00:09:31.204 [2024-05-12 04:49:37.451128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:31.204 [2024-05-12 04:49:37.609923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:31.204 [2024-05-12 04:49:37.609940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:31.204 Running I/O for 5 seconds...
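The big-I/O pass now starting repeats the verify workload with 64 KiB requests instead of the 4 KiB used above; the bdevperf invocation is otherwise identical. An annotated sketch of the command being run (flag meanings per bdevperf's usage text; -C is passed through exactly as in the trace):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # -q 128   : keep 128 I/Os outstanding per job
  # -o 65536 : 64 KiB request size (the earlier pass used -o 4096)
  # -w verify: write a pattern, read it back, and compare
  # -t 5     : run for 5 seconds; -m 0x3 pins the jobs to cores 0-1
  "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The larger request size is why the IOPS figures below land in the hundreds rather than the thousands, while MiB/s stays in the same order of magnitude.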
00:09:37.771
00:09:37.771 Latency(us)
00:09:37.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:37.771 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0x0 length 0xbd0b
00:09:37.771 Nvme0n1 : 5.33 253.10 15.82 0.00 0.00 495203.51 57909.99 682527.65
00:09:37.771 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:37.771 Nvme0n1 : 5.36 259.37 16.21 0.00 0.00 485080.84 41228.10 659649.63
00:09:37.771 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0x0 length 0xa000
00:09:37.771 Nvme1n1 : 5.37 258.98 16.19 0.00 0.00 478500.26 38368.35 617706.59
00:09:37.771 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0xa000 length 0xa000
00:09:37.771 Nvme1n1 : 5.36 259.27 16.20 0.00 0.00 478835.20 41943.04 602454.57
00:09:37.771 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0x0 length 0x8000
00:09:37.771 Nvme2n1 : 5.37 258.89 16.18 0.00 0.00 471578.57 38844.97 556698.53
00:09:37.771 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0x8000 length 0x8000
00:09:37.771 Nvme2n1 : 5.37 259.18 16.20 0.00 0.00 472392.71 42657.98 552885.53
00:09:37.771 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0x0 length 0x8000
00:09:37.771 Nvme2n2 : 5.39 265.23 16.58 0.00 0.00 455076.65 16205.27 503316.48
00:09:37.771 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:37.771 Verification LBA range: start 0x8000 length 0x8000
00:09:37.772 Nvme2n2 : 5.38 265.56 16.60 0.00 0.00 456233.61 15132.86 499503.48
00:09:37.772 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:37.772 Verification LBA range: start 0x0 length 0x8000
00:09:37.772 Nvme2n3 : 5.41 273.42 17.09 0.00 0.00 435968.24 10068.71 530007.51
00:09:37.772 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:37.772 Verification LBA range: start 0x8000 length 0x8000
00:09:37.772 Nvme2n3 : 5.39 274.15 17.13 0.00 0.00 437077.17 3693.85 440401.92
00:09:37.772 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:37.772 Verification LBA range: start 0x0 length 0x2000
00:09:37.772 Nvme3n1 : 5.42 289.26 18.08 0.00 0.00 406600.91 5391.83 419430.40
00:09:37.772 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:37.772 Verification LBA range: start 0x2000 length 0x2000
00:09:37.772 Nvme3n1 : 5.40 282.25 17.64 0.00 0.00 418568.25 4527.94 440401.92
===================================================================================================================
00:09:37.772 Total : 3198.67 199.92 0.00 0.00 456456.40 3693.85 682527.65
00:09:38.339
00:09:38.339 real 0m8.035s
00:09:38.339 user 0m14.821s
00:09:38.339 sys 0m0.253s
00:09:38.339 04:49:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:38.339 04:49:45 -- common/autotest_common.sh@10 -- # set +x
00:09:38.339 ************************************
00:09:38.339 END TEST bdev_verify_big_io
00:09:38.339 ************************************
00:09:38.339 04:49:45 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:38.339 04:49:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:09:38.339 04:49:45 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:38.339 04:49:45 -- common/autotest_common.sh@10 -- # set +x
00:09:38.339 ************************************
00:09:38.339 START TEST bdev_write_zeroes
00:09:38.339 ************************************
00:09:38.339 04:49:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:38.598 [2024-05-12 04:49:45.360430] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:09:38.598 [2024-05-12 04:49:45.360592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ]
00:09:38.598 [2024-05-12 04:49:45.528921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:38.598 [2024-05-12 04:49:45.699103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:39.166 Running I/O for 1 seconds...
00:09:40.538
00:09:40.538 Latency(us)
00:09:40.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:40.538 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:40.538 Nvme0n1 : 1.01 8830.65 34.49 0.00 0.00 14445.65 6940.86 25856.93
00:09:40.538 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:40.538 Nvme1n1 : 1.02 8816.52 34.44 0.00 0.00 14445.09 11200.70 19660.80
00:09:40.538 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:40.538 Nvme2n1 : 1.02 8803.27 34.39 0.00 0.00 14420.79 7506.85 24307.90
00:09:40.538 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:40.538 Nvme2n2 : 1.02 8840.16 34.53 0.00 0.00 14331.07 7208.96 16562.73
00:09:40.538 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:40.538 Nvme2n3 : 1.02 8826.52 34.48 0.00 0.00 14327.22 7089.80 16920.20
00:09:40.538 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:40.538 Nvme3n1 : 1.02 8813.49 34.43 0.00 0.00 14323.93 6821.70 16801.05
00:09:40.538 ===================================================================================================================
00:09:40.538 Total : 52930.60 206.76 0.00 0.00 14382.10 6821.70 25856.93
00:09:41.473
00:09:41.473 real 0m3.074s
00:09:41.473 user 0m2.746s
00:09:41.473 sys 0m0.207s
00:09:41.473 04:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:41.473 ************************************
00:09:41.473 END TEST bdev_write_zeroes
00:09:41.473 ************************************
00:09:41.473 04:49:48 -- common/autotest_common.sh@10 -- # set +x
00:09:41.473 04:49:48 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:41.473 04:49:48 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:09:41.473 04:49:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:41.473 04:49:48 -- common/autotest_common.sh@10 --
# set +x 00:09:41.473 ************************************ 00:09:41.473 START TEST bdev_json_nonenclosed 00:09:41.473 ************************************ 00:09:41.474 04:49:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:41.474 [2024-05-12 04:49:48.489988] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:41.474 [2024-05-12 04:49:48.490165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62177 ] 00:09:41.733 [2024-05-12 04:49:48.659038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.733 [2024-05-12 04:49:48.811157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.733 [2024-05-12 04:49:48.811387] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:41.733 [2024-05-12 04:49:48.811415] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:42.301 00:09:42.301 real 0m0.740s 00:09:42.301 user 0m0.511s 00:09:42.301 sys 0m0.124s 00:09:42.301 04:49:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.301 04:49:49 -- common/autotest_common.sh@10 -- # set +x 00:09:42.301 ************************************ 00:09:42.301 END TEST bdev_json_nonenclosed 00:09:42.301 ************************************ 00:09:42.301 04:49:49 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:42.301 04:49:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:42.301 04:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.301 04:49:49 -- common/autotest_common.sh@10 -- # set +x 00:09:42.301 ************************************ 00:09:42.301 START TEST bdev_json_nonarray 00:09:42.301 ************************************ 00:09:42.301 04:49:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:42.301 [2024-05-12 04:49:49.282890] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:42.301 [2024-05-12 04:49:49.283063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62201 ] 00:09:42.560 [2024-05-12 04:49:49.453134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.560 [2024-05-12 04:49:49.624579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.560 [2024-05-12 04:49:49.624848] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:42.560 [2024-05-12 04:49:49.624876] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:43.130 00:09:43.130 real 0m0.788s 00:09:43.130 user 0m0.533s 00:09:43.130 sys 0m0.149s 00:09:43.130 04:49:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.130 04:49:49 -- common/autotest_common.sh@10 -- # set +x 00:09:43.130 ************************************ 00:09:43.130 END TEST bdev_json_nonarray 00:09:43.130 ************************************ 00:09:43.130 04:49:50 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:43.130 04:49:50 -- bdev/blockdev.sh@809 -- # cleanup 00:09:43.130 04:49:50 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:43.130 04:49:50 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:43.130 04:49:50 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:09:43.130 00:09:43.130 real 0m50.134s 00:09:43.130 user 1m20.232s 00:09:43.130 sys 0m6.226s 00:09:43.130 04:49:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.130 04:49:50 -- common/autotest_common.sh@10 -- # set +x 00:09:43.130 ************************************ 00:09:43.130 END TEST blockdev_nvme 00:09:43.130 ************************************ 00:09:43.130 04:49:50 -- spdk/autotest.sh@219 -- # uname -s 00:09:43.130 04:49:50 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:09:43.130 04:49:50 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:43.130 04:49:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:43.130 04:49:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:43.130 04:49:50 -- common/autotest_common.sh@10 -- # set +x 00:09:43.130 ************************************ 00:09:43.130 START TEST blockdev_nvme_gpt 00:09:43.130 ************************************ 00:09:43.130 04:49:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:43.130 * Looking for test storage... 
00:09:43.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:43.130 04:49:50 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:43.130 04:49:50 -- bdev/nbd_common.sh@6 -- # set -e 00:09:43.130 04:49:50 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:43.130 04:49:50 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:43.130 04:49:50 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:43.130 04:49:50 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:43.130 04:49:50 -- bdev/blockdev.sh@18 -- # : 00:09:43.130 04:49:50 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:43.130 04:49:50 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:43.130 04:49:50 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:43.130 04:49:50 -- bdev/blockdev.sh@672 -- # uname -s 00:09:43.130 04:49:50 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:43.130 04:49:50 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:43.130 04:49:50 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:09:43.130 04:49:50 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:43.130 04:49:50 -- bdev/blockdev.sh@682 -- # dek= 00:09:43.130 04:49:50 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:43.130 04:49:50 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:43.130 04:49:50 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:43.130 04:49:50 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:09:43.130 04:49:50 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:43.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.130 04:49:50 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62272 00:09:43.130 04:49:50 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:43.130 04:49:50 -- bdev/blockdev.sh@47 -- # waitforlisten 62272 00:09:43.130 04:49:50 -- common/autotest_common.sh@819 -- # '[' -z 62272 ']' 00:09:43.130 04:49:50 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:43.130 04:49:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.130 04:49:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:43.130 04:49:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.130 04:49:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:43.130 04:49:50 -- common/autotest_common.sh@10 -- # set +x 00:09:43.388 [2024-05-12 04:49:50.291763] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:43.388 [2024-05-12 04:49:50.291918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62272 ] 00:09:43.388 [2024-05-12 04:49:50.455408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.646 [2024-05-12 04:49:50.615036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:43.646 [2024-05-12 04:49:50.615298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.023 04:49:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.023 04:49:51 -- common/autotest_common.sh@852 -- # return 0 00:09:45.023 04:49:51 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:45.023 04:49:51 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:09:45.023 04:49:51 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:45.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.540 Waiting for block devices as requested 00:09:45.540 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.540 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.797 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.797 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.066 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:51.066 04:49:57 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:09:51.066 04:49:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:09:51.066 04:49:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:09:51.066 04:49:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:09:51.066 04:49:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:51.066 04:49:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:51.066 04:49:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:51.066 04:49:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:51.066 04:49:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:09:51.066 04:49:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:09:51.066 04:49:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:51.066 04:49:57 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:51.066 04:49:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:09:51.066 04:49:57 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:09:51.066 04:49:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:51.066 04:49:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:51.066 04:49:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:09:51.066 04:49:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:51.066 04:49:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:51.066 04:49:57 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:09:51.066 04:49:57 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:09:51.066 04:49:57 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:09:51.066 04:49:57 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:51.066 04:49:57 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:09:51.066 04:49:57 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:09:51.066 04:49:57 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:09:51.066 04:49:57 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:09:51.066 BYT; 00:09:51.066 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:51.066 04:49:57 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:09:51.066 BYT; 00:09:51.066 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:51.066 04:49:57 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:09:51.067 04:49:57 -- bdev/blockdev.sh@114 -- # break 00:09:51.067 04:49:57 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:09:51.067 04:49:57 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:51.067 04:49:57 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:51.067 04:49:57 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:51.067 04:49:57 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:09:51.067 04:49:57 -- scripts/common.sh@410 -- # local spdk_guid 00:09:51.067 04:49:57 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:51.067 04:49:57 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:51.067 04:49:57 -- scripts/common.sh@415 -- # IFS='()' 00:09:51.067 04:49:57 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:09:51.067 04:49:57 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:51.067 04:49:57 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:51.067 04:49:57 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:51.067 04:49:57 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:51.067 04:49:57 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:51.067 04:49:57 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:09:51.067 04:49:57 -- scripts/common.sh@422 -- # local spdk_guid 00:09:51.067 04:49:57 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:51.067 04:49:57 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:51.067 04:49:57 -- scripts/common.sh@427 -- # IFS='()' 00:09:51.067 04:49:57 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:09:51.067 04:49:57 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:51.067 04:49:57 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:51.067 04:49:57 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:51.067 04:49:57 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:51.067 04:49:57 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:51.067 04:49:57 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:09:52.018 The operation has completed successfully. 00:09:52.018 04:49:58 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:09:52.987 The operation has completed successfully. 
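What the parted/sgdisk sequence above accomplishes: parted first writes a fresh GPT label with two half-disk partitions, then each sgdisk call retypes one partition with an SPDK partition type GUID (read out of module/bdev/gpt/gpt.h as traced) and pins a fixed unique partition GUID so the resulting bdev names stay stable across runs. Condensed, the preparation is (illustrative $dev variable; GUIDs exactly as extracted above):

  dev=/dev/nvme2n1   # the scratch namespace the test picked
  parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  # Partition 1 gets the current SPDK GPT type GUID, partition 2 the legacy one.
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"

When spdk_tgt later examines this disk, the gpt bdev module recognizes those type GUIDs and exposes the two halves as Nvme0n1p1 and Nvme0n1p2, visible in the bdev_get_bdevs dump further down.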
00:09:52.987 04:49:59 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.922 lsblk: /dev/nvme0c0n1: not a block device 00:09:53.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:54.180 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.180 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.180 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.180 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.438 04:50:01 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:54.438 04:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.438 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.438 [] 00:09:54.438 04:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.438 04:50:01 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:54.438 04:50:01 -- bdev/blockdev.sh@79 -- # local json 00:09:54.438 04:50:01 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:54.438 04:50:01 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:54.438 04:50:01 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:54.438 04:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.438 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.697 04:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.697 04:50:01 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:54.697 04:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.697 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.697 04:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.697 04:50:01 -- bdev/blockdev.sh@738 -- # cat 00:09:54.697 04:50:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:54.697 04:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.697 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.697 04:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.697 04:50:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:54.697 04:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.697 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.697 04:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.697 04:50:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:54.697 04:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.697 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:09:54.697 04:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.697 04:50:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:54.697 04:50:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:54.697 04:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.697 04:50:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:54.697 04:50:01 -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.957 04:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.957 04:50:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:54.957 04:50:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:54.957 04:50:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "611ebcc1-d43e-4df1-bd9a-964709d1743b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "611ebcc1-d43e-4df1-bd9a-964709d1743b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' 
' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0bb0f91a-9c78-4ddd-969c-5033c3deecd3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0bb0f91a-9c78-4ddd-969c-5033c3deecd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "693b68b9-9116-459c-aaa4-c94044f80642"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "693b68b9-9116-459c-aaa4-c94044f80642",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "dda4f169-4048-405c-8448-d7e89c3ca00f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dda4f169-4048-405c-8448-d7e89c3ca00f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe 
Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b0b4e4ed-19c6-41d6-adc1-cf34bb430274"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b0b4e4ed-19c6-41d6-adc1-cf34bb430274",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:54.957 04:50:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:54.957 04:50:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:54.957 04:50:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:54.957 04:50:01 -- bdev/blockdev.sh@752 -- # killprocess 62272 00:09:54.957 04:50:01 -- common/autotest_common.sh@926 -- # '[' -z 62272 ']' 00:09:54.957 04:50:01 -- common/autotest_common.sh@930 -- # kill -0 62272 00:09:54.957 04:50:01 -- common/autotest_common.sh@931 -- # uname 00:09:54.957 04:50:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:54.957 04:50:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62272 00:09:54.957 killing process with pid 62272 00:09:54.957 04:50:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:54.957 04:50:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:54.957 04:50:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62272' 00:09:54.957 04:50:01 -- common/autotest_common.sh@945 -- # kill 62272 00:09:54.957 04:50:01 -- common/autotest_common.sh@950 -- # wait 62272 00:09:56.861 04:50:03 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:56.861 04:50:03 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:56.861 04:50:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:56.861 04:50:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.861 04:50:03 -- common/autotest_common.sh@10 -- # set +x 00:09:56.861 ************************************ 00:09:56.861 START TEST bdev_hello_world 00:09:56.861 ************************************ 00:09:56.861 04:50:03 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:57.121 [2024-05-12 04:50:04.008825] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:57.121 [2024-05-12 04:50:04.008992] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62973 ] 00:09:57.121 [2024-05-12 04:50:04.179941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.380 [2024-05-12 04:50:04.362499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.947 [2024-05-12 04:50:04.947010] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:57.947 [2024-05-12 04:50:04.947068] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:57.947 [2024-05-12 04:50:04.947102] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:57.947 [2024-05-12 04:50:04.950058] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:57.947 [2024-05-12 04:50:04.950576] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:57.947 [2024-05-12 04:50:04.950617] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:57.947 [2024-05-12 04:50:04.950843] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:57.947 00:09:57.947 [2024-05-12 04:50:04.950881] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:59.325 00:09:59.325 real 0m2.121s 00:09:59.325 user 0m1.787s 00:09:59.325 sys 0m0.224s 00:09:59.325 ************************************ 00:09:59.325 END TEST bdev_hello_world 00:09:59.325 ************************************ 00:09:59.325 04:50:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.325 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:09:59.325 04:50:06 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:59.325 04:50:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:59.325 04:50:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.325 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:09:59.325 ************************************ 00:09:59.325 START TEST bdev_bounds 00:09:59.325 ************************************ 00:09:59.325 04:50:06 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:59.325 04:50:06 -- bdev/blockdev.sh@288 -- # bdevio_pid=63016 00:09:59.325 Process bdevio pid: 63016 00:09:59.325 04:50:06 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:59.325 04:50:06 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 63016' 00:09:59.325 04:50:06 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:59.325 04:50:06 -- bdev/blockdev.sh@291 -- # waitforlisten 63016 00:09:59.325 04:50:06 -- common/autotest_common.sh@819 -- # '[' -z 63016 ']' 00:09:59.325 04:50:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.325 04:50:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:59.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
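The bdev dump earlier in this run describes the test topology: Nvme2n1, Nvme2n2 and Nvme2n3 are three non-shareable namespaces (ns_data ids 1-3, can_share false) behind the single controller at PCI 0000:00:08.0, while Nvme3n1 sits alone on 0000:00:09.0 under subsystem nqn.2019-08.org.qemu:fdp-subsys3 with multi_ctrlr and can_share set. Since that dump is plain bdev_get_bdevs JSON, a topology summary is one jq call away; a minimal sketch, assuming the app is still listening on the default RPC socket and that the array is saved to bdevs.json (a hypothetical filename):

# sketch: one line per NVMe-backed bdev -> name, backing PCI address, shareability
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs > bdevs.json
jq -r '.[] | select(.driver_specific.nvme) | "\(.name)\t\(.driver_specific.nvme[0].pci_address)\tcan_share=\(.driver_specific.nvme[0].ns_data.can_share)"' bdevs.json

The select() guard skips bdevs without an nvme driver_specific section, such as the gpt partitions Nvme0n1p1 and Nvme0n1p2.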
00:09:59.325 04:50:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.325 04:50:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:59.325 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:09:59.325 [2024-05-12 04:50:06.170129] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:59.325 [2024-05-12 04:50:06.170310] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63016 ] 00:09:59.325 [2024-05-12 04:50:06.343332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.584 [2024-05-12 04:50:06.531182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.584 [2024-05-12 04:50:06.531332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.584 [2024-05-12 04:50:06.531372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.963 04:50:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:00.963 04:50:07 -- common/autotest_common.sh@852 -- # return 0 00:10:00.963 04:50:07 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:00.963 I/O targets: 00:10:00.963 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:00.963 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:00.963 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:00.963 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:00.963 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:00.963 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:00.963 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:00.963 00:10:00.963 00:10:00.963 CUnit - A unit testing framework for C - Version 2.1-3 00:10:00.963 http://cunit.sourceforge.net/ 00:10:00.963 00:10:00.963 00:10:00.963 Suite: bdevio tests on: Nvme3n1 00:10:00.963 Test: blockdev write read block ...passed 00:10:00.963 Test: blockdev write zeroes read block ...passed 00:10:00.963 Test: blockdev write zeroes read no split ...passed 00:10:00.963 Test: blockdev write zeroes read split ...passed 00:10:00.963 Test: blockdev write zeroes read split partial ...passed 00:10:00.963 Test: blockdev reset ...[2024-05-12 04:50:07.986947] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:00.963 passed 00:10:00.963 Test: blockdev write read 8 blocks ...[2024-05-12 04:50:07.990932] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:00.963 passed 00:10:00.963 Test: blockdev write read size > 128k ...passed 00:10:00.963 Test: blockdev write read invalid size ...passed 00:10:00.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:00.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:00.963 Test: blockdev write read max offset ...passed 00:10:00.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:00.963 Test: blockdev writev readv 8 blocks ...passed 00:10:00.963 Test: blockdev writev readv 30 x 1block ...passed 00:10:00.963 Test: blockdev writev readv block ...passed 00:10:00.963 Test: blockdev writev readv size > 128k ...passed 00:10:00.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:00.963 Test: blockdev comparev and writev ...[2024-05-12 04:50:07.999419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28be0a000 len:0x1000 00:10:00.963 passed 00:10:00.963 Test: blockdev nvme passthru rw ...[2024-05-12 04:50:07.999696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:00.963 passed 00:10:00.963 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:50:08.000791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:00.963 passed 00:10:00.963 Test: blockdev nvme admin passthru ...[2024-05-12 04:50:08.001043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:00.963 passed 00:10:00.963 Test: blockdev copy ...passed 00:10:00.963 Suite: bdevio tests on: Nvme2n3 00:10:00.963 Test: blockdev write read block ...passed 00:10:00.963 Test: blockdev write zeroes read block ...passed 00:10:00.963 Test: blockdev write zeroes read no split ...passed 00:10:00.963 Test: blockdev write zeroes read split ...passed 00:10:00.963 Test: blockdev write zeroes read split partial ...passed 00:10:00.963 Test: blockdev reset ...[2024-05-12 04:50:08.065874] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:00.963 [2024-05-12 04:50:08.069831] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:00.963 passed 00:10:00.963 Test: blockdev write read 8 blocks ...passed 00:10:00.963 Test: blockdev write read size > 128k ...passed 00:10:00.963 Test: blockdev write read invalid size ...passed 00:10:00.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:00.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:00.963 Test: blockdev write read max offset ...passed 00:10:00.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:00.963 Test: blockdev writev readv 8 blocks ...passed 00:10:00.963 Test: blockdev writev readv 30 x 1block ...passed 00:10:00.963 Test: blockdev writev readv block ...passed 00:10:00.963 Test: blockdev writev readv size > 128k ...passed 00:10:00.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:00.963 Test: blockdev comparev and writev ...[2024-05-12 04:50:08.079036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28af04000 len:0x1000 00:10:00.963 passed 00:10:00.963 Test: blockdev nvme passthru rw ...[2024-05-12 04:50:08.079299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:00.963 passed 00:10:00.963 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:50:08.080296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:00.963 passed 00:10:00.963 Test: blockdev nvme admin passthru ...[2024-05-12 04:50:08.080533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:00.963 passed 00:10:00.963 Test: blockdev copy ...passed 00:10:00.963 Suite: bdevio tests on: Nvme2n2 00:10:00.963 Test: blockdev write read block ...passed 00:10:01.223 Test: blockdev write zeroes read block ...passed 00:10:01.223 Test: blockdev write zeroes read no split ...passed 00:10:01.223 Test: blockdev write zeroes read split ...passed 00:10:01.223 Test: blockdev write zeroes read split partial ...passed 00:10:01.223 Test: blockdev reset ...[2024-05-12 04:50:08.146050] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:01.223 passed 00:10:01.223 Test: blockdev write read 8 blocks ...[2024-05-12 04:50:08.149890] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:01.223 passed 00:10:01.223 Test: blockdev write read size > 128k ...passed 00:10:01.223 Test: blockdev write read invalid size ...passed 00:10:01.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.223 Test: blockdev write read max offset ...passed 00:10:01.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.223 Test: blockdev writev readv 8 blocks ...passed 00:10:01.223 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.223 Test: blockdev writev readv block ...passed 00:10:01.223 Test: blockdev writev readv size > 128k ...passed 00:10:01.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.223 Test: blockdev comparev and writev ...[2024-05-12 04:50:08.158045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28af04000 len:0x1000 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme passthru rw ...[2024-05-12 04:50:08.158335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:50:08.159310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme admin passthru ...[2024-05-12 04:50:08.159551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.223 passed 00:10:01.223 Test: blockdev copy ...passed 00:10:01.223 Suite: bdevio tests on: Nvme2n1 00:10:01.223 Test: blockdev write read block ...passed 00:10:01.223 Test: blockdev write zeroes read block ...passed 00:10:01.223 Test: blockdev write zeroes read no split ...passed 00:10:01.223 Test: blockdev write zeroes read split ...passed 00:10:01.223 Test: blockdev write zeroes read split partial ...passed 00:10:01.223 Test: blockdev reset ...[2024-05-12 04:50:08.225317] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:01.223 [2024-05-12 04:50:08.229172] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:01.223 passed 00:10:01.223 Test: blockdev write read 8 blocks ...passed 00:10:01.223 Test: blockdev write read size > 128k ...passed 00:10:01.223 Test: blockdev write read invalid size ...passed 00:10:01.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.223 Test: blockdev write read max offset ...passed 00:10:01.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.223 Test: blockdev writev readv 8 blocks ...passed 00:10:01.223 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.223 Test: blockdev writev readv block ...passed 00:10:01.223 Test: blockdev writev readv size > 128k ...passed 00:10:01.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.223 Test: blockdev comparev and writev ...[2024-05-12 04:50:08.237496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29a83c000 len:0x1000 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme passthru rw ...[2024-05-12 04:50:08.237837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:50:08.238803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.223 [2024-05-12 04:50:08.238928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme admin passthru ...passed 00:10:01.223 Test: blockdev copy ...passed 00:10:01.223 Suite: bdevio tests on: Nvme1n1 00:10:01.223 Test: blockdev write read block ...passed 00:10:01.223 Test: blockdev write zeroes read block ...passed 00:10:01.223 Test: blockdev write zeroes read no split ...passed 00:10:01.223 Test: blockdev write zeroes read split ...passed 00:10:01.223 Test: blockdev write zeroes read split partial ...passed 00:10:01.223 Test: blockdev reset ...[2024-05-12 04:50:08.305760] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:01.223 [2024-05-12 04:50:08.309379] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:01.223 passed 00:10:01.223 Test: blockdev write read 8 blocks ...passed 00:10:01.223 Test: blockdev write read size > 128k ...passed 00:10:01.223 Test: blockdev write read invalid size ...passed 00:10:01.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.223 Test: blockdev write read max offset ...passed 00:10:01.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.223 Test: blockdev writev readv 8 blocks ...passed 00:10:01.223 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.223 Test: blockdev writev readv block ...passed 00:10:01.223 Test: blockdev writev readv size > 128k ...passed 00:10:01.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.223 Test: blockdev comparev and writev ...[2024-05-12 04:50:08.318058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29a838000 len:0x1000 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme passthru rw ...[2024-05-12 04:50:08.318348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme passthru vendor specific ...[2024-05-12 04:50:08.319287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.223 [2024-05-12 04:50:08.319407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.223 passed 00:10:01.223 Test: blockdev nvme admin passthru ...passed 00:10:01.223 Test: blockdev copy ...passed 00:10:01.223 Suite: bdevio tests on: Nvme0n1p2 00:10:01.223 Test: blockdev write read block ...passed 00:10:01.223 Test: blockdev write zeroes read block ...passed 00:10:01.223 Test: blockdev write zeroes read no split ...passed 00:10:01.483 Test: blockdev write zeroes read split ...passed 00:10:01.483 Test: blockdev write zeroes read split partial ...passed 00:10:01.483 Test: blockdev reset ...[2024-05-12 04:50:08.387062] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:01.483 [2024-05-12 04:50:08.390696] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:01.483 passed 00:10:01.483 Test: blockdev write read 8 blocks ...passed 00:10:01.483 Test: blockdev write read size > 128k ...passed 00:10:01.483 Test: blockdev write read invalid size ...passed 00:10:01.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.483 Test: blockdev write read max offset ...passed 00:10:01.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.483 Test: blockdev writev readv 8 blocks ...passed 00:10:01.483 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.483 Test: blockdev writev readv block ...passed 00:10:01.483 Test: blockdev writev readv size > 128k ...passed 00:10:01.483 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.483 Test: blockdev comparev and writev ...passed 00:10:01.483 Test: blockdev nvme passthru rw ...passed 00:10:01.483 Test: blockdev nvme passthru vendor specific ...passed 00:10:01.483 Test: blockdev nvme admin passthru ...passed 00:10:01.483 Test: blockdev copy ...[2024-05-12 04:50:08.398409] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:10:01.483 separate metadata which is not supported yet. 00:10:01.483 passed 00:10:01.483 Suite: bdevio tests on: Nvme0n1p1 00:10:01.483 Test: blockdev write read block ...passed 00:10:01.483 Test: blockdev write zeroes read block ...passed 00:10:01.483 Test: blockdev write zeroes read no split ...passed 00:10:01.483 Test: blockdev write zeroes read split ...passed 00:10:01.483 Test: blockdev write zeroes read split partial ...passed 00:10:01.483 Test: blockdev reset ...[2024-05-12 04:50:08.454218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:01.483 [2024-05-12 04:50:08.457706] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:01.483 passed 00:10:01.483 Test: blockdev write read 8 blocks ...passed 00:10:01.483 Test: blockdev write read size > 128k ...passed 00:10:01.483 Test: blockdev write read invalid size ...passed 00:10:01.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.483 Test: blockdev write read max offset ...passed 00:10:01.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.483 Test: blockdev writev readv 8 blocks ...passed 00:10:01.483 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.483 Test: blockdev writev readv block ...passed 00:10:01.483 Test: blockdev writev readv size > 128k ...passed 00:10:01.483 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.483 Test: blockdev comparev and writev ...passed 00:10:01.483 Test: blockdev nvme passthru rw ...passed 00:10:01.483 Test: blockdev nvme passthru vendor specific ...passed 00:10:01.483 Test: blockdev nvme admin passthru ...passed 00:10:01.483 Test: blockdev copy ...[2024-05-12 04:50:08.465795] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:10:01.483 separate metadata which is not supported yet. 
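Each bdevio suite above runs the same battery per bdev, and two of the NOTICE-level completions are exercised error paths rather than failures: the COMPARE FAILURE (02/85) entries are the miscompare leg of the comparev-and-writev test, and the INVALID OPCODE (00/01) entries follow the fabrics opcodes (FABRIC CONNECT, FABRIC RESERVED / VENDOR SPECIFIC) that the passthru tests send to these PCIe controllers; in both cases the test still reports passed. The one genuine skip is logged explicitly: Nvme0n1p1 and Nvme0n1p2 carry separate metadata, which comparev_and_writev does not support yet. The harness drives all of this as two processes; a minimal sketch of the same flow, using the binaries invoked above and eliding the waitforlisten polling:

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK/test/bdev/bdev.json &   # -w: load bdevs, then wait for RPCs
# poll until /var/tmp/spdk.sock accepts connections, as the harness does above
$SPDK/test/bdev/bdevio/tests.py perform_tests                              # drives the CUnit suites over RPC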
00:10:01.483 passed 00:10:01.483 00:10:01.483 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.483 suites 7 7 n/a 0 0 00:10:01.483 tests 161 161 161 0 0 00:10:01.483 asserts 1006 1006 1006 0 n/a 00:10:01.483 00:10:01.483 Elapsed time = 1.449 seconds 00:10:01.483 0 00:10:01.483 04:50:08 -- bdev/blockdev.sh@293 -- # killprocess 63016 00:10:01.483 04:50:08 -- common/autotest_common.sh@926 -- # '[' -z 63016 ']' 00:10:01.483 04:50:08 -- common/autotest_common.sh@930 -- # kill -0 63016 00:10:01.483 04:50:08 -- common/autotest_common.sh@931 -- # uname 00:10:01.483 04:50:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:01.483 04:50:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63016 00:10:01.483 04:50:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:01.483 04:50:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:01.483 04:50:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63016' 00:10:01.483 killing process with pid 63016 00:10:01.483 04:50:08 -- common/autotest_common.sh@945 -- # kill 63016 00:10:01.483 04:50:08 -- common/autotest_common.sh@950 -- # wait 63016 00:10:02.420 04:50:09 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:02.420 00:10:02.420 real 0m3.352s 00:10:02.420 user 0m8.804s 00:10:02.420 sys 0m0.383s 00:10:02.420 04:50:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.420 04:50:09 -- common/autotest_common.sh@10 -- # set +x 00:10:02.420 ************************************ 00:10:02.420 END TEST bdev_bounds 00:10:02.420 ************************************ 00:10:02.420 04:50:09 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:02.420 04:50:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:02.420 04:50:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.420 04:50:09 -- common/autotest_common.sh@10 -- # set +x 00:10:02.420 ************************************ 00:10:02.420 START TEST bdev_nbd 00:10:02.420 ************************************ 00:10:02.420 04:50:09 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:02.420 04:50:09 -- bdev/blockdev.sh@298 -- # uname -s 00:10:02.420 04:50:09 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:10:02.420 04:50:09 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:02.420 04:50:09 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:02.420 04:50:09 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:02.420 04:50:09 -- bdev/blockdev.sh@302 -- # local bdev_all 00:10:02.420 04:50:09 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:10:02.420 04:50:09 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:10:02.420 04:50:09 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:02.420 04:50:09 -- bdev/blockdev.sh@309 -- # local nbd_all 00:10:02.420 04:50:09 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:10:02.420 04:50:09 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:02.420 04:50:09 -- bdev/blockdev.sh@312 -- # local nbd_list 00:10:02.420 04:50:09 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:02.420 04:50:09 -- bdev/blockdev.sh@313 -- # local bdev_list 00:10:02.420 04:50:09 -- bdev/blockdev.sh@316 -- # nbd_pid=63084 00:10:02.420 04:50:09 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:02.420 04:50:09 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:02.420 04:50:09 -- bdev/blockdev.sh@318 -- # waitforlisten 63084 /var/tmp/spdk-nbd.sock 00:10:02.420 04:50:09 -- common/autotest_common.sh@819 -- # '[' -z 63084 ']' 00:10:02.420 04:50:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:02.420 04:50:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:02.420 04:50:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:02.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:02.420 04:50:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:02.420 04:50:09 -- common/autotest_common.sh@10 -- # set +x 00:10:02.680 [2024-05-12 04:50:09.561700] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:02.680 [2024-05-12 04:50:09.562045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.680 [2024-05-12 04:50:09.728501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.938 [2024-05-12 04:50:09.945261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.314 04:50:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:04.314 04:50:11 -- common/autotest_common.sh@852 -- # return 0 00:10:04.314 04:50:11 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@24 -- # local i 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:04.314 04:50:11 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.315 04:50:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:04.574 04:50:11 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:04.574 04:50:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:04.574 04:50:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:04.574 04:50:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:04.574 04:50:11 -- common/autotest_common.sh@857 -- # local i 00:10:04.574 04:50:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:04.574 04:50:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:04.574 04:50:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:04.574 04:50:11 -- common/autotest_common.sh@861 -- # break 00:10:04.574 04:50:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:04.574 04:50:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:04.574 04:50:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.574 1+0 records in 00:10:04.574 1+0 records out 00:10:04.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697501 s, 5.9 MB/s 00:10:04.574 04:50:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.574 04:50:11 -- common/autotest_common.sh@874 -- # size=4096 00:10:04.574 04:50:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.574 04:50:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:04.574 04:50:11 -- common/autotest_common.sh@877 -- # return 0 00:10:04.574 04:50:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.574 04:50:11 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.574 04:50:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:04.831 04:50:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:04.831 04:50:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:04.831 04:50:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:04.831 04:50:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:04.831 04:50:11 -- common/autotest_common.sh@857 -- # local i 00:10:04.831 04:50:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:04.831 04:50:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:04.831 04:50:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:04.831 04:50:11 -- common/autotest_common.sh@861 -- # break 00:10:04.831 04:50:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:04.831 04:50:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:04.831 04:50:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.831 1+0 records in 00:10:04.831 1+0 records out 00:10:04.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443826 s, 9.2 MB/s 00:10:04.832 04:50:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.832 04:50:11 -- common/autotest_common.sh@874 -- # size=4096 00:10:04.832 04:50:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.832 04:50:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:04.832 04:50:11 -- common/autotest_common.sh@877 -- # return 0 00:10:04.832 04:50:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.832 04:50:11 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:04.832 04:50:11 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:05.089 04:50:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:05.089 04:50:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:05.089 04:50:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:05.089 04:50:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:10:05.089 04:50:12 -- common/autotest_common.sh@857 -- # local i 00:10:05.089 04:50:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:05.089 04:50:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:05.089 04:50:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:10:05.089 04:50:12 -- common/autotest_common.sh@861 -- # break 00:10:05.089 04:50:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:05.089 04:50:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:05.089 04:50:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.089 1+0 records in 00:10:05.089 1+0 records out 00:10:05.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568035 s, 7.2 MB/s 00:10:05.089 04:50:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.089 04:50:12 -- common/autotest_common.sh@874 -- # size=4096 00:10:05.089 04:50:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.089 04:50:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:05.089 04:50:12 -- common/autotest_common.sh@877 -- # return 0 00:10:05.089 04:50:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.089 04:50:12 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:05.089 04:50:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:05.359 04:50:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:05.359 04:50:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:05.359 04:50:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:05.359 04:50:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:10:05.359 04:50:12 -- common/autotest_common.sh@857 -- # local i 00:10:05.359 04:50:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:05.359 04:50:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:05.359 04:50:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:10:05.359 04:50:12 -- common/autotest_common.sh@861 -- # break 00:10:05.359 04:50:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:05.359 04:50:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:05.359 04:50:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.359 1+0 records in 00:10:05.359 1+0 records out 00:10:05.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557149 s, 7.4 MB/s 00:10:05.359 04:50:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.359 04:50:12 -- common/autotest_common.sh@874 -- # size=4096 00:10:05.359 04:50:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.359 04:50:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:05.359 04:50:12 -- common/autotest_common.sh@877 -- # return 0 00:10:05.359 04:50:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.359 04:50:12 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:05.359 04:50:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:05.621 04:50:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:05.621 04:50:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:05.621 04:50:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:05.621 04:50:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:10:05.621 04:50:12 -- common/autotest_common.sh@857 -- # local i 00:10:05.621 04:50:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:05.621 04:50:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:05.621 04:50:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:10:05.621 04:50:12 -- common/autotest_common.sh@861 -- # break 00:10:05.621 04:50:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:05.621 04:50:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:05.621 04:50:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.621 1+0 records in 00:10:05.621 1+0 records out 00:10:05.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103124 s, 4.0 MB/s 00:10:05.621 04:50:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.621 04:50:12 -- common/autotest_common.sh@874 -- # size=4096 00:10:05.621 04:50:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.621 04:50:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:05.621 04:50:12 -- common/autotest_common.sh@877 -- # return 0 00:10:05.621 04:50:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.621 04:50:12 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:05.621 04:50:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:05.880 04:50:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:05.880 04:50:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:05.880 04:50:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:05.880 04:50:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:10:05.880 04:50:12 -- common/autotest_common.sh@857 -- # local i 00:10:05.880 04:50:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:05.880 04:50:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:05.880 04:50:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:10:05.880 04:50:12 -- common/autotest_common.sh@861 -- # break 00:10:05.880 04:50:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:05.880 04:50:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:05.880 04:50:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.880 1+0 records in 00:10:05.880 1+0 records out 00:10:05.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101258 s, 4.0 MB/s 00:10:05.880 04:50:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.880 04:50:12 -- common/autotest_common.sh@874 -- # size=4096 00:10:05.880 04:50:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.880 04:50:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:05.880 04:50:12 -- common/autotest_common.sh@877 -- # return 0 
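Every iteration of the loop above is the same attach cycle: nbd_start_disk exports a bdev through the kernel nbd driver, waitfornbd polls /proc/partitions until the device node is live, and a single O_DIRECT dd read proves the data path end to end. Condensed into one cycle, with the retry loop around grep elided:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_start_disk Nvme0n1p1 /dev/nbd0        # export the bdev as /dev/nbd0
grep -q -w nbd0 /proc/partitions               # the harness retries this until the kernel registers it
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
$RPC nbd_stop_disk /dev/nbd0                   # detach; nbd_get_disks then drops the entry

The MB/s figures dd prints on these one-block probes measure per-request latency, not bandwidth: 4096 bytes in 0.000443826 s is the logged 9.2 MB/s, i.e. roughly 0.44 ms for one 4 KiB round trip through nbd.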
00:10:05.880 04:50:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.880 04:50:12 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:05.880 04:50:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:06.139 04:50:13 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:06.139 04:50:13 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:06.139 04:50:13 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:06.139 04:50:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:10:06.139 04:50:13 -- common/autotest_common.sh@857 -- # local i 00:10:06.139 04:50:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:06.139 04:50:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:06.139 04:50:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:10:06.139 04:50:13 -- common/autotest_common.sh@861 -- # break 00:10:06.139 04:50:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:06.139 04:50:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:06.139 04:50:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:06.139 1+0 records in 00:10:06.139 1+0 records out 00:10:06.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855439 s, 4.8 MB/s 00:10:06.139 04:50:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.139 04:50:13 -- common/autotest_common.sh@874 -- # size=4096 00:10:06.139 04:50:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:06.139 04:50:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:06.139 04:50:13 -- common/autotest_common.sh@877 -- # return 0 00:10:06.139 04:50:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:06.139 04:50:13 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:06.139 04:50:13 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:06.397 04:50:13 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:06.397 { 00:10:06.397 "nbd_device": "/dev/nbd0", 00:10:06.397 "bdev_name": "Nvme0n1p1" 00:10:06.397 }, 00:10:06.397 { 00:10:06.397 "nbd_device": "/dev/nbd1", 00:10:06.397 "bdev_name": "Nvme0n1p2" 00:10:06.397 }, 00:10:06.397 { 00:10:06.397 "nbd_device": "/dev/nbd2", 00:10:06.397 "bdev_name": "Nvme1n1" 00:10:06.397 }, 00:10:06.397 { 00:10:06.397 "nbd_device": "/dev/nbd3", 00:10:06.397 "bdev_name": "Nvme2n1" 00:10:06.397 }, 00:10:06.397 { 00:10:06.397 "nbd_device": "/dev/nbd4", 00:10:06.397 "bdev_name": "Nvme2n2" 00:10:06.397 }, 00:10:06.397 { 00:10:06.397 "nbd_device": "/dev/nbd5", 00:10:06.397 "bdev_name": "Nvme2n3" 00:10:06.397 }, 00:10:06.397 { 00:10:06.397 "nbd_device": "/dev/nbd6", 00:10:06.397 "bdev_name": "Nvme3n1" 00:10:06.397 } 00:10:06.398 ]' 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:06.398 { 00:10:06.398 "nbd_device": "/dev/nbd0", 00:10:06.398 "bdev_name": "Nvme0n1p1" 00:10:06.398 }, 00:10:06.398 { 00:10:06.398 "nbd_device": "/dev/nbd1", 00:10:06.398 "bdev_name": "Nvme0n1p2" 00:10:06.398 }, 00:10:06.398 { 00:10:06.398 "nbd_device": "/dev/nbd2", 00:10:06.398 "bdev_name": "Nvme1n1" 00:10:06.398 }, 00:10:06.398 { 00:10:06.398 "nbd_device": "/dev/nbd3", 00:10:06.398 "bdev_name": "Nvme2n1" 00:10:06.398 }, 00:10:06.398 { 
00:10:06.398 "nbd_device": "/dev/nbd4", 00:10:06.398 "bdev_name": "Nvme2n2" 00:10:06.398 }, 00:10:06.398 { 00:10:06.398 "nbd_device": "/dev/nbd5", 00:10:06.398 "bdev_name": "Nvme2n3" 00:10:06.398 }, 00:10:06.398 { 00:10:06.398 "nbd_device": "/dev/nbd6", 00:10:06.398 "bdev_name": "Nvme3n1" 00:10:06.398 } 00:10:06.398 ]' 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@51 -- # local i 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.398 04:50:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@41 -- # break 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.656 04:50:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@41 -- # break 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.927 04:50:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@41 -- # break 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.199 04:50:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:10:07.457 04:50:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@41 -- # break 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.457 04:50:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@41 -- # break 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.715 04:50:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@41 -- # break 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.972 04:50:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@41 -- # break 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.230 04:50:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:08.487 04:50:15 -- 
bdev/nbd_common.sh@65 -- # true 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@65 -- # count=0 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@122 -- # count=0 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@127 -- # return 0 00:10:08.487 04:50:15 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@12 -- # local i 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.487 04:50:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:08.745 /dev/nbd0 00:10:08.745 04:50:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:08.745 04:50:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:08.745 04:50:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:08.745 04:50:15 -- common/autotest_common.sh@857 -- # local i 00:10:08.745 04:50:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:08.745 04:50:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:08.745 04:50:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:08.745 04:50:15 -- common/autotest_common.sh@861 -- # break 00:10:08.745 04:50:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:08.745 04:50:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:08.746 04:50:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.746 1+0 records in 00:10:08.746 1+0 records out 00:10:08.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670495 s, 6.1 MB/s 00:10:08.746 04:50:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.746 04:50:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:08.746 04:50:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.746 04:50:15 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:08.746 04:50:15 -- common/autotest_common.sh@877 -- # return 0 00:10:08.746 04:50:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.746 04:50:15 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:08.746 04:50:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:09.004 /dev/nbd1 00:10:09.004 04:50:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:09.004 04:50:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:09.004 04:50:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:09.004 04:50:15 -- common/autotest_common.sh@857 -- # local i 00:10:09.004 04:50:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:09.004 04:50:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:09.004 04:50:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:09.004 04:50:15 -- common/autotest_common.sh@861 -- # break 00:10:09.004 04:50:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:09.004 04:50:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:09.004 04:50:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.004 1+0 records in 00:10:09.004 1+0 records out 00:10:09.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652114 s, 6.3 MB/s 00:10:09.004 04:50:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.004 04:50:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:09.004 04:50:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.004 04:50:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:09.004 04:50:15 -- common/autotest_common.sh@877 -- # return 0 00:10:09.004 04:50:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.004 04:50:15 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.004 04:50:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:09.262 /dev/nbd10 00:10:09.262 04:50:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:09.262 04:50:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:09.262 04:50:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:10:09.262 04:50:16 -- common/autotest_common.sh@857 -- # local i 00:10:09.262 04:50:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:09.262 04:50:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:09.262 04:50:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:10:09.262 04:50:16 -- common/autotest_common.sh@861 -- # break 00:10:09.262 04:50:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:09.262 04:50:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:09.262 04:50:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.262 1+0 records in 00:10:09.262 1+0 records out 00:10:09.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831468 s, 4.9 MB/s 00:10:09.262 04:50:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.262 04:50:16 -- common/autotest_common.sh@874 -- # size=4096 00:10:09.262 04:50:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:10:09.262 04:50:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:09.262 04:50:16 -- common/autotest_common.sh@877 -- # return 0 00:10:09.262 04:50:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.262 04:50:16 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.262 04:50:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:09.520 /dev/nbd11 00:10:09.520 04:50:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:09.520 04:50:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:09.520 04:50:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:10:09.520 04:50:16 -- common/autotest_common.sh@857 -- # local i 00:10:09.520 04:50:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:09.520 04:50:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:09.520 04:50:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:10:09.520 04:50:16 -- common/autotest_common.sh@861 -- # break 00:10:09.520 04:50:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:09.520 04:50:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:09.520 04:50:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.520 1+0 records in 00:10:09.520 1+0 records out 00:10:09.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879695 s, 4.7 MB/s 00:10:09.520 04:50:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.520 04:50:16 -- common/autotest_common.sh@874 -- # size=4096 00:10:09.520 04:50:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.520 04:50:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:09.520 04:50:16 -- common/autotest_common.sh@877 -- # return 0 00:10:09.520 04:50:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.520 04:50:16 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.521 04:50:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:09.779 /dev/nbd12 00:10:09.779 04:50:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:09.779 04:50:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:09.779 04:50:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:10:09.779 04:50:16 -- common/autotest_common.sh@857 -- # local i 00:10:09.779 04:50:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:09.779 04:50:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:09.779 04:50:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:10:09.779 04:50:16 -- common/autotest_common.sh@861 -- # break 00:10:09.779 04:50:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:09.779 04:50:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:09.779 04:50:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.779 1+0 records in 00:10:09.779 1+0 records out 00:10:09.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768682 s, 5.3 MB/s 00:10:09.779 04:50:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.779 04:50:16 -- common/autotest_common.sh@874 -- # size=4096 00:10:09.779 04:50:16 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.779 04:50:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:09.779 04:50:16 -- common/autotest_common.sh@877 -- # return 0 00:10:09.779 04:50:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.779 04:50:16 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:09.779 04:50:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:10.037 /dev/nbd13 00:10:10.037 04:50:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:10.037 04:50:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:10.037 04:50:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:10:10.037 04:50:17 -- common/autotest_common.sh@857 -- # local i 00:10:10.037 04:50:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:10.037 04:50:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:10.037 04:50:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:10:10.037 04:50:17 -- common/autotest_common.sh@861 -- # break 00:10:10.037 04:50:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:10.037 04:50:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:10.037 04:50:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:10.037 1+0 records in 00:10:10.037 1+0 records out 00:10:10.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000859349 s, 4.8 MB/s 00:10:10.037 04:50:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:10.037 04:50:17 -- common/autotest_common.sh@874 -- # size=4096 00:10:10.038 04:50:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:10.038 04:50:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:10.038 04:50:17 -- common/autotest_common.sh@877 -- # return 0 00:10:10.038 04:50:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:10.038 04:50:17 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:10.038 04:50:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:10.296 /dev/nbd14 00:10:10.296 04:50:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:10.296 04:50:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:10.296 04:50:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:10:10.296 04:50:17 -- common/autotest_common.sh@857 -- # local i 00:10:10.296 04:50:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:10.296 04:50:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:10.296 04:50:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:10:10.296 04:50:17 -- common/autotest_common.sh@861 -- # break 00:10:10.296 04:50:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:10.296 04:50:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:10.296 04:50:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:10.296 1+0 records in 00:10:10.296 1+0 records out 00:10:10.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00308647 s, 1.3 MB/s 00:10:10.296 04:50:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:10.296 04:50:17 -- common/autotest_common.sh@874 -- # size=4096 00:10:10.296 04:50:17 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:10.296 04:50:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:10.296 04:50:17 -- common/autotest_common.sh@877 -- # return 0 00:10:10.296 04:50:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:10.296 04:50:17 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:10.296 04:50:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:10.296 04:50:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.296 04:50:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:10.555 04:50:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd0", 00:10:10.555 "bdev_name": "Nvme0n1p1" 00:10:10.555 }, 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd1", 00:10:10.555 "bdev_name": "Nvme0n1p2" 00:10:10.555 }, 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd10", 00:10:10.555 "bdev_name": "Nvme1n1" 00:10:10.555 }, 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd11", 00:10:10.555 "bdev_name": "Nvme2n1" 00:10:10.555 }, 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd12", 00:10:10.555 "bdev_name": "Nvme2n2" 00:10:10.555 }, 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd13", 00:10:10.555 "bdev_name": "Nvme2n3" 00:10:10.555 }, 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd14", 00:10:10.555 "bdev_name": "Nvme3n1" 00:10:10.555 } 00:10:10.555 ]' 00:10:10.555 04:50:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:10.555 { 00:10:10.555 "nbd_device": "/dev/nbd0", 00:10:10.556 "bdev_name": "Nvme0n1p1" 00:10:10.556 }, 00:10:10.556 { 00:10:10.556 "nbd_device": "/dev/nbd1", 00:10:10.556 "bdev_name": "Nvme0n1p2" 00:10:10.556 }, 00:10:10.556 { 00:10:10.556 "nbd_device": "/dev/nbd10", 00:10:10.556 "bdev_name": "Nvme1n1" 00:10:10.556 }, 00:10:10.556 { 00:10:10.556 "nbd_device": "/dev/nbd11", 00:10:10.556 "bdev_name": "Nvme2n1" 00:10:10.556 }, 00:10:10.556 { 00:10:10.556 "nbd_device": "/dev/nbd12", 00:10:10.556 "bdev_name": "Nvme2n2" 00:10:10.556 }, 00:10:10.556 { 00:10:10.556 "nbd_device": "/dev/nbd13", 00:10:10.556 "bdev_name": "Nvme2n3" 00:10:10.556 }, 00:10:10.556 { 00:10:10.556 "nbd_device": "/dev/nbd14", 00:10:10.556 "bdev_name": "Nvme3n1" 00:10:10.556 } 00:10:10.556 ]' 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:10.556 /dev/nbd1 00:10:10.556 /dev/nbd10 00:10:10.556 /dev/nbd11 00:10:10.556 /dev/nbd12 00:10:10.556 /dev/nbd13 00:10:10.556 /dev/nbd14' 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:10.556 /dev/nbd1 00:10:10.556 /dev/nbd10 00:10:10.556 /dev/nbd11 00:10:10.556 /dev/nbd12 00:10:10.556 /dev/nbd13 00:10:10.556 /dev/nbd14' 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@65 -- # count=7 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@66 -- # echo 7 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@95 -- # count=7 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:10.556 256+0 records in 00:10:10.556 256+0 records out 00:10:10.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105501 s, 99.4 MB/s 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.556 04:50:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:10.814 256+0 records in 00:10:10.814 256+0 records out 00:10:10.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201011 s, 5.2 MB/s 00:10:10.814 04:50:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.814 04:50:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:11.073 256+0 records in 00:10:11.073 256+0 records out 00:10:11.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179124 s, 5.9 MB/s 00:10:11.073 04:50:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:11.073 04:50:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:11.331 256+0 records in 00:10:11.331 256+0 records out 00:10:11.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193206 s, 5.4 MB/s 00:10:11.331 04:50:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:11.331 04:50:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:11.331 256+0 records in 00:10:11.331 256+0 records out 00:10:11.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161001 s, 6.5 MB/s 00:10:11.331 04:50:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:11.331 04:50:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:11.590 256+0 records in 00:10:11.590 256+0 records out 00:10:11.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171643 s, 6.1 MB/s 00:10:11.590 04:50:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:11.590 04:50:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:11.849 256+0 records in 00:10:11.849 256+0 records out 00:10:11.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150914 s, 6.9 MB/s 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:11.849 256+0 records in 00:10:11.849 256+0 records out 00:10:11.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160138 s, 6.5 MB/s 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@51 -- # local i 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.849 04:50:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@41 -- # break 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.108 04:50:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@41 -- # break 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.366 04:50:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:12.625 04:50:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:12.625 04:50:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@41 -- # break 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.626 04:50:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@41 -- # break 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:12.888 04:50:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@41 -- # break 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@45 -- # return 0 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:13.146 04:50:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@41 -- # break 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@45 -- # return 0 00:10:13.405 04:50:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:10:13.405 04:50:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@41 -- # break 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@45 -- # return 0 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.663 04:50:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:13.921 04:50:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:13.921 04:50:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:13.922 04:50:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:13.922 04:50:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:13.922 04:50:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:13.922 04:50:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:13.922 04:50:20 -- bdev/nbd_common.sh@65 -- # true 00:10:13.922 04:50:20 -- bdev/nbd_common.sh@65 -- # count=0 00:10:13.922 04:50:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@104 -- # count=0 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@109 -- # return 0 00:10:13.922 04:50:21 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:13.922 04:50:21 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:14.180 malloc_lvol_verify 00:10:14.180 04:50:21 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:14.439 ceba1af0-e39a-43ba-b9ae-7c4befa395b7 00:10:14.439 04:50:21 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:14.697 dc87e04d-072f-4e86-888b-865d96a62967 00:10:14.697 04:50:21 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:14.956 /dev/nbd0 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:14.956 mke2fs 1.46.5 (30-Dec-2021) 00:10:14.956 Discarding device blocks: 0/4096 done 00:10:14.956 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:14.956 00:10:14.956 Allocating group tables: 0/1 done 00:10:14.956 Writing inode tables: 0/1 done 00:10:14.956 Creating journal (1024 blocks): done 
00:10:14.956 Writing superblocks and filesystem accounting information: 0/1 done 00:10:14.956 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@51 -- # local i 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.956 04:50:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@41 -- # break 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:15.214 04:50:22 -- bdev/nbd_common.sh@147 -- # return 0 00:10:15.214 04:50:22 -- bdev/blockdev.sh@324 -- # killprocess 63084 00:10:15.214 04:50:22 -- common/autotest_common.sh@926 -- # '[' -z 63084 ']' 00:10:15.214 04:50:22 -- common/autotest_common.sh@930 -- # kill -0 63084 00:10:15.214 04:50:22 -- common/autotest_common.sh@931 -- # uname 00:10:15.214 04:50:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:15.214 04:50:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63084 00:10:15.214 04:50:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:15.214 04:50:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:15.214 04:50:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63084' 00:10:15.214 killing process with pid 63084 00:10:15.214 04:50:22 -- common/autotest_common.sh@945 -- # kill 63084 00:10:15.214 04:50:22 -- common/autotest_common.sh@950 -- # wait 63084 00:10:16.150 04:50:23 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:10:16.150 00:10:16.150 real 0m13.774s 00:10:16.150 user 0m19.396s 00:10:16.150 sys 0m4.217s 00:10:16.150 04:50:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.150 04:50:23 -- common/autotest_common.sh@10 -- # set +x 00:10:16.150 ************************************ 00:10:16.150 END TEST bdev_nbd 00:10:16.150 ************************************ 00:10:16.408 04:50:23 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:16.408 04:50:23 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:10:16.408 04:50:23 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:10:16.408 skipping fio tests on NVMe due to multi-ns failures. 00:10:16.408 04:50:23 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
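The nbd_with_lvol_verify step that just completed reduces to a short RPC sequence against the same /var/tmp/spdk-nbd.sock socket: create a malloc bdev, build an lvstore and an lvol on it, export the lvol over /dev/nbd0, and treat a clean mkfs.ext4 run as proof that the device round-trips I/O. A condensed sketch using only commands that appear in the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol, addressable as lvs/lvol
$rpc nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # a non-zero exit here fails the test
$rpc nbd_stop_disk /dev/nbd0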
00:10:16.408 04:50:23 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:16.408 04:50:23 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:16.408 04:50:23 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:16.408 04:50:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.408 04:50:23 -- common/autotest_common.sh@10 -- # set +x 00:10:16.408 ************************************ 00:10:16.408 START TEST bdev_verify 00:10:16.408 ************************************ 00:10:16.408 04:50:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:16.408 [2024-05-12 04:50:23.404855] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:16.408 [2024-05-12 04:50:23.405040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63528 ] 00:10:16.667 [2024-05-12 04:50:23.573213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:16.667 [2024-05-12 04:50:23.730296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.667 [2024-05-12 04:50:23.730316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.234 Running I/O for 5 seconds... 00:10:22.538 00:10:22.538 Latency(us) 00:10:22.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.538 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x0 length 0x5e800 00:10:22.538 Nvme0n1p1 : 5.05 2347.78 9.17 0.00 0.00 54334.67 7417.48 57671.68 00:10:22.538 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x5e800 length 0x5e800 00:10:22.538 Nvme0n1p1 : 5.05 2353.35 9.19 0.00 0.00 54242.06 7238.75 58624.93 00:10:22.538 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x0 length 0x5e7ff 00:10:22.538 Nvme0n1p2 : 5.06 2352.72 9.19 0.00 0.00 54233.79 6106.76 55765.18 00:10:22.538 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:22.538 Nvme0n1p2 : 5.06 2352.59 9.19 0.00 0.00 54154.07 7923.90 51713.86 00:10:22.538 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x0 length 0xa0000 00:10:22.538 Nvme1n1 : 5.06 2351.68 9.19 0.00 0.00 54189.47 7268.54 52905.43 00:10:22.538 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0xa0000 length 0xa0000 00:10:22.538 Nvme1n1 : 5.06 2351.56 9.19 0.00 0.00 54109.60 9055.88 47424.23 00:10:22.538 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x0 length 0x80000 00:10:22.538 Nvme2n1 : 5.06 2350.63 9.18 0.00 0.00 54109.44 8281.37 46470.98 00:10:22.538 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x80000 length 0x80000 00:10:22.538 Nvme2n1 : 
5.06 2350.49 9.18 0.00 0.00 54077.26 10247.45 46232.67 00:10:22.538 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x0 length 0x80000 00:10:22.538 Nvme2n2 : 5.06 2349.55 9.18 0.00 0.00 54080.02 9413.35 44802.79 00:10:22.538 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x80000 length 0x80000 00:10:22.538 Nvme2n2 : 5.06 2349.43 9.18 0.00 0.00 54042.64 11379.43 44564.48 00:10:22.538 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x0 length 0x80000 00:10:22.538 Nvme2n3 : 5.06 2348.50 9.17 0.00 0.00 54048.87 10545.34 43849.54 00:10:22.538 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x80000 length 0x80000 00:10:22.538 Nvme2n3 : 5.07 2355.46 9.20 0.00 0.00 53915.35 1355.40 44326.17 00:10:22.538 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x0 length 0x20000 00:10:22.538 Nvme3n1 : 5.07 2347.84 9.17 0.00 0.00 54013.74 11081.54 43611.23 00:10:22.538 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:22.538 Verification LBA range: start 0x20000 length 0x20000 00:10:22.538 Nvme3n1 : 5.07 2354.56 9.20 0.00 0.00 53881.04 2338.44 44087.85 00:10:22.538 =================================================================================================================== 00:10:22.539 Total : 32916.14 128.58 0.00 0.00 54102.13 1355.40 58624.93 00:10:24.440 00:10:24.440 real 0m8.250s 00:10:24.440 user 0m15.254s 00:10:24.440 sys 0m0.278s 00:10:24.440 04:50:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.440 04:50:31 -- common/autotest_common.sh@10 -- # set +x 00:10:24.440 ************************************ 00:10:24.440 END TEST bdev_verify 00:10:24.440 ************************************ 00:10:24.699 04:50:31 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:24.699 04:50:31 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:24.699 04:50:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.699 04:50:31 -- common/autotest_common.sh@10 -- # set +x 00:10:24.699 ************************************ 00:10:24.699 START TEST bdev_verify_big_io 00:10:24.699 ************************************ 00:10:24.699 04:50:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:24.699 [2024-05-12 04:50:31.695557] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:24.699 [2024-05-12 04:50:31.695744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63632 ] 00:10:24.958 [2024-05-12 04:50:31.850907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:24.958 [2024-05-12 04:50:32.016637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.958 [2024-05-12 04:50:32.016653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.894 Running I/O for 5 seconds... 00:10:31.163 00:10:31.163 Latency(us) 00:10:31.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.163 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x0 length 0x5e80 00:10:31.163 Nvme0n1p1 : 5.40 231.90 14.49 0.00 0.00 536822.91 86269.21 716844.68 00:10:31.163 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x5e80 length 0x5e80 00:10:31.163 Nvme0n1p1 : 5.41 231.80 14.49 0.00 0.00 537018.43 85792.58 709218.68 00:10:31.163 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x0 length 0x5e7f 00:10:31.163 Nvme0n1p2 : 5.41 231.80 14.49 0.00 0.00 530110.21 86269.21 663462.63 00:10:31.163 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:31.163 Nvme0n1p2 : 5.41 231.65 14.48 0.00 0.00 530096.52 87222.46 655836.63 00:10:31.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x0 length 0xa000 00:10:31.163 Nvme1n1 : 5.43 238.82 14.93 0.00 0.00 513586.62 20971.52 613893.59 00:10:31.163 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0xa000 length 0xa000 00:10:31.163 Nvme1n1 : 5.43 238.74 14.92 0.00 0.00 513204.16 20614.05 606267.58 00:10:31.163 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x0 length 0x8000 00:10:31.163 Nvme2n1 : 5.43 238.71 14.92 0.00 0.00 507061.04 21924.77 564324.54 00:10:31.163 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x8000 length 0x8000 00:10:31.163 Nvme2n1 : 5.43 238.66 14.92 0.00 0.00 506623.92 20971.52 556698.53 00:10:31.163 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x0 length 0x8000 00:10:31.163 Nvme2n2 : 5.44 247.95 15.50 0.00 0.00 485390.22 4319.42 583389.56 00:10:31.163 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x8000 length 0x8000 00:10:31.163 Nvme2n2 : 5.45 247.64 15.48 0.00 0.00 485095.29 9711.24 583389.56 00:10:31.163 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x0 length 0x8000 00:10:31.163 Nvme2n3 : 5.44 247.87 15.49 0.00 0.00 478916.31 4527.94 591015.56 00:10:31.163 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x8000 length 0x8000 00:10:31.163 Nvme2n3 : 5.45 247.51 15.47 0.00 0.00 478638.19 11021.96 591015.56 
00:10:31.163 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x0 length 0x2000 00:10:31.163 Nvme3n1 : 5.45 255.37 15.96 0.00 0.00 459248.21 2085.24 934185.89 00:10:31.163 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:31.163 Verification LBA range: start 0x2000 length 0x2000 00:10:31.163 Nvme3n1 : 5.46 261.73 16.36 0.00 0.00 447233.16 8043.05 800730.76 00:10:31.163 =================================================================================================================== 00:10:31.163 Total : 3390.17 211.89 0.00 0.00 499524.51 2085.24 934185.89 00:10:33.067 00:10:33.067 real 0m8.170s 00:10:33.067 user 0m15.138s 00:10:33.067 sys 0m0.271s 00:10:33.067 04:50:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.067 04:50:39 -- common/autotest_common.sh@10 -- # set +x 00:10:33.067 ************************************ 00:10:33.067 END TEST bdev_verify_big_io 00:10:33.067 ************************************ 00:10:33.067 04:50:39 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.067 04:50:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:33.067 04:50:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.067 04:50:39 -- common/autotest_common.sh@10 -- # set +x 00:10:33.067 ************************************ 00:10:33.067 START TEST bdev_write_zeroes 00:10:33.067 ************************************ 00:10:33.067 04:50:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:33.067 [2024-05-12 04:50:39.910972] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:33.067 [2024-05-12 04:50:39.911125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63741 ] 00:10:33.067 [2024-05-12 04:50:40.067023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.326 [2024-05-12 04:50:40.222797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.893 Running I/O for 1 seconds... 
00:10:34.827 00:10:34.827 Latency(us) 00:10:34.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.827 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:34.827 Nvme0n1p1 : 1.02 7284.42 28.45 0.00 0.00 17496.98 8043.05 31695.59 00:10:34.827 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:34.827 Nvme0n1p2 : 1.02 7272.34 28.41 0.00 0.00 17491.51 8460.10 31457.28 00:10:34.827 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:34.827 Nvme1n1 : 1.02 7261.28 28.36 0.00 0.00 17453.36 13166.78 29193.31 00:10:34.827 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:34.827 Nvme2n1 : 1.03 7288.56 28.47 0.00 0.00 17330.57 12273.11 24546.21 00:10:34.827 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:34.827 Nvme2n2 : 1.03 7277.46 28.43 0.00 0.00 17284.78 11141.12 22639.71 00:10:34.827 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:34.827 Nvme2n3 : 1.03 7317.19 28.58 0.00 0.00 17184.94 6613.18 21924.77 00:10:34.827 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:34.827 Nvme3n1 : 1.03 7306.06 28.54 0.00 0.00 17166.49 6940.86 20971.52 00:10:34.827 =================================================================================================================== 00:10:34.827 Total : 51007.31 199.25 0.00 0.00 17343.18 6613.18 31695.59 00:10:36.202 00:10:36.202 real 0m3.095s 00:10:36.202 user 0m2.778s 00:10:36.202 sys 0m0.194s 00:10:36.202 04:50:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.202 04:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:36.202 ************************************ 00:10:36.202 END TEST bdev_write_zeroes 00:10:36.202 ************************************ 00:10:36.202 04:50:42 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:36.202 04:50:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:36.202 04:50:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.202 04:50:42 -- common/autotest_common.sh@10 -- # set +x 00:10:36.202 ************************************ 00:10:36.202 START TEST bdev_json_nonenclosed 00:10:36.202 ************************************ 00:10:36.202 04:50:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:36.202 [2024-05-12 04:50:43.076820] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:36.202 [2024-05-12 04:50:43.077018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63794 ] 00:10:36.202 [2024-05-12 04:50:43.246645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.461 [2024-05-12 04:50:43.399334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.461 [2024-05-12 04:50:43.399545] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:10:36.461 [2024-05-12 04:50:43.399571] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:36.719 00:10:36.719 real 0m0.744s 00:10:36.719 user 0m0.518s 00:10:36.719 sys 0m0.120s 00:10:36.719 04:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.719 ************************************ 00:10:36.719 END TEST bdev_json_nonenclosed 00:10:36.719 04:50:43 -- common/autotest_common.sh@10 -- # set +x 00:10:36.719 ************************************ 00:10:36.719 04:50:43 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:36.719 04:50:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:36.719 04:50:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.719 04:50:43 -- common/autotest_common.sh@10 -- # set +x 00:10:36.719 ************************************ 00:10:36.719 START TEST bdev_json_nonarray 00:10:36.719 ************************************ 00:10:36.719 04:50:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:36.978 [2024-05-12 04:50:43.875060] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:36.978 [2024-05-12 04:50:43.875273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63825 ] 00:10:36.978 [2024-05-12 04:50:44.047298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.237 [2024-05-12 04:50:44.204234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.237 [2024-05-12 04:50:44.204490] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:37.237 [2024-05-12 04:50:44.204518] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:37.495 00:10:37.495 real 0m0.741s 00:10:37.495 user 0m0.496s 00:10:37.495 sys 0m0.140s 00:10:37.495 04:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.495 04:50:44 -- common/autotest_common.sh@10 -- # set +x 00:10:37.495 ************************************ 00:10:37.495 END TEST bdev_json_nonarray 00:10:37.495 ************************************ 00:10:37.495 04:50:44 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:10:37.495 04:50:44 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:10:37.495 04:50:44 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:37.495 04:50:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:37.495 04:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.495 04:50:44 -- common/autotest_common.sh@10 -- # set +x 00:10:37.495 ************************************ 00:10:37.495 START TEST bdev_gpt_uuid 00:10:37.495 ************************************ 00:10:37.495 04:50:44 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:10:37.495 04:50:44 -- bdev/blockdev.sh@612 -- # local bdev 00:10:37.495 04:50:44 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:10:37.495 04:50:44 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=63845 00:10:37.495 04:50:44 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:37.495 04:50:44 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:37.495 04:50:44 -- bdev/blockdev.sh@47 -- # waitforlisten 63845 00:10:37.495 04:50:44 -- common/autotest_common.sh@819 -- # '[' -z 63845 ']' 00:10:37.495 04:50:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.496 04:50:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:37.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.496 04:50:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.496 04:50:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:37.496 04:50:44 -- common/autotest_common.sh@10 -- # set +x 00:10:37.755 [2024-05-12 04:50:44.686502] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:37.755 [2024-05-12 04:50:44.686686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63845 ] 00:10:37.755 [2024-05-12 04:50:44.855755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.014 [2024-05-12 04:50:45.015009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:38.014 [2024-05-12 04:50:45.015257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.392 04:50:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:39.392 04:50:46 -- common/autotest_common.sh@852 -- # return 0 00:10:39.392 04:50:46 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:39.392 04:50:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:39.392 04:50:46 -- common/autotest_common.sh@10 -- # set +x 00:10:39.651 Some configs were skipped because the RPC state that can call them passed over. 
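From here bdev_gpt_uuid drives everything over JSON-RPC: bdev.json is loaded into a fresh spdk_tgt, bdev_wait_for_examine lets GPT probing finish, and each partition bdev is then fetched by its unique partition GUID and cross-checked with jq. A sketch of the first lookup, with the GUID copied from the output below (rpc_cmd in the trace is a thin wrapper around rpc.py):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$rpc bdev_wait_for_examine
bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
echo "$bdev" | jq -r length                 # expect exactly one matching bdev
echo "$bdev" | jq -r '.[0].aliases[0]'      # the alias echoes the partition GUID back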
00:10:39.651 04:50:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:39.651 04:50:46 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:10:39.651 04:50:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:39.651 04:50:46 -- common/autotest_common.sh@10 -- # set +x 00:10:39.651 04:50:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:39.651 04:50:46 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:39.651 04:50:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:39.651 04:50:46 -- common/autotest_common.sh@10 -- # set +x 00:10:39.651 04:50:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:39.651 04:50:46 -- bdev/blockdev.sh@619 -- # bdev='[ 00:10:39.651 { 00:10:39.651 "name": "Nvme0n1p1", 00:10:39.651 "aliases": [ 00:10:39.651 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:39.651 ], 00:10:39.651 "product_name": "GPT Disk", 00:10:39.651 "block_size": 4096, 00:10:39.651 "num_blocks": 774144, 00:10:39.651 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:39.651 "md_size": 64, 00:10:39.651 "md_interleave": false, 00:10:39.651 "dif_type": 0, 00:10:39.651 "assigned_rate_limits": { 00:10:39.651 "rw_ios_per_sec": 0, 00:10:39.651 "rw_mbytes_per_sec": 0, 00:10:39.651 "r_mbytes_per_sec": 0, 00:10:39.651 "w_mbytes_per_sec": 0 00:10:39.651 }, 00:10:39.651 "claimed": false, 00:10:39.651 "zoned": false, 00:10:39.651 "supported_io_types": { 00:10:39.651 "read": true, 00:10:39.651 "write": true, 00:10:39.651 "unmap": true, 00:10:39.651 "write_zeroes": true, 00:10:39.651 "flush": true, 00:10:39.651 "reset": true, 00:10:39.652 "compare": true, 00:10:39.652 "compare_and_write": false, 00:10:39.652 "abort": true, 00:10:39.652 "nvme_admin": false, 00:10:39.652 "nvme_io": false 00:10:39.652 }, 00:10:39.652 "driver_specific": { 00:10:39.652 "gpt": { 00:10:39.652 "base_bdev": "Nvme0n1", 00:10:39.652 "offset_blocks": 256, 00:10:39.652 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:39.652 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:39.652 "partition_name": "SPDK_TEST_first" 00:10:39.652 } 00:10:39.652 } 00:10:39.652 } 00:10:39.652 ]' 00:10:39.652 04:50:46 -- bdev/blockdev.sh@620 -- # jq -r length 00:10:39.652 04:50:46 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:10:39.652 04:50:46 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:10:39.652 04:50:46 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:39.652 04:50:46 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:39.652 04:50:46 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:39.652 04:50:46 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:39.652 04:50:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:39.652 04:50:46 -- common/autotest_common.sh@10 -- # set +x 00:10:39.910 04:50:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:39.910 04:50:46 -- bdev/blockdev.sh@624 -- # bdev='[ 00:10:39.911 { 00:10:39.911 "name": "Nvme0n1p2", 00:10:39.911 "aliases": [ 00:10:39.911 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:39.911 ], 00:10:39.911 "product_name": "GPT Disk", 00:10:39.911 "block_size": 4096, 00:10:39.911 "num_blocks": 774143, 00:10:39.911 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:10:39.911 "md_size": 64, 00:10:39.911 "md_interleave": false, 00:10:39.911 "dif_type": 0, 00:10:39.911 "assigned_rate_limits": { 00:10:39.911 "rw_ios_per_sec": 0, 00:10:39.911 "rw_mbytes_per_sec": 0, 00:10:39.911 "r_mbytes_per_sec": 0, 00:10:39.911 "w_mbytes_per_sec": 0 00:10:39.911 }, 00:10:39.911 "claimed": false, 00:10:39.911 "zoned": false, 00:10:39.911 "supported_io_types": { 00:10:39.911 "read": true, 00:10:39.911 "write": true, 00:10:39.911 "unmap": true, 00:10:39.911 "write_zeroes": true, 00:10:39.911 "flush": true, 00:10:39.911 "reset": true, 00:10:39.911 "compare": true, 00:10:39.911 "compare_and_write": false, 00:10:39.911 "abort": true, 00:10:39.911 "nvme_admin": false, 00:10:39.911 "nvme_io": false 00:10:39.911 }, 00:10:39.911 "driver_specific": { 00:10:39.911 "gpt": { 00:10:39.911 "base_bdev": "Nvme0n1", 00:10:39.911 "offset_blocks": 774400, 00:10:39.911 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:39.911 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:39.911 "partition_name": "SPDK_TEST_second" 00:10:39.911 } 00:10:39.911 } 00:10:39.911 } 00:10:39.911 ]' 00:10:39.911 04:50:46 -- bdev/blockdev.sh@625 -- # jq -r length 00:10:39.911 04:50:46 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:10:39.911 04:50:46 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:10:39.911 04:50:46 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:39.911 04:50:46 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:39.911 04:50:46 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:39.911 04:50:46 -- bdev/blockdev.sh@629 -- # killprocess 63845 00:10:39.911 04:50:46 -- common/autotest_common.sh@926 -- # '[' -z 63845 ']' 00:10:39.911 04:50:46 -- common/autotest_common.sh@930 -- # kill -0 63845 00:10:39.911 04:50:46 -- common/autotest_common.sh@931 -- # uname 00:10:39.911 04:50:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:39.911 04:50:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63845 00:10:39.911 04:50:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:39.911 04:50:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:39.911 killing process with pid 63845 00:10:39.911 04:50:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63845' 00:10:39.911 04:50:46 -- common/autotest_common.sh@945 -- # kill 63845 00:10:39.911 04:50:46 -- common/autotest_common.sh@950 -- # wait 63845 00:10:41.815 00:10:41.815 real 0m4.119s 00:10:41.815 user 0m4.553s 00:10:41.815 sys 0m0.444s 00:10:41.815 04:50:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.815 ************************************ 00:10:41.815 04:50:48 -- common/autotest_common.sh@10 -- # set +x 00:10:41.815 END TEST bdev_gpt_uuid 00:10:41.815 ************************************ 00:10:41.815 04:50:48 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:10:41.815 04:50:48 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:41.815 04:50:48 -- bdev/blockdev.sh@809 -- # cleanup 00:10:41.815 04:50:48 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:41.815 04:50:48 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:41.815 04:50:48 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:10:41.815 04:50:48 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:10:41.815 04:50:48 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:10:41.815 04:50:48 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:42.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.384 Waiting for block devices as requested 00:10:42.384 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.384 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.642 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.642 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.910 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:47.910 04:50:54 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:10:47.910 04:50:54 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:10:47.910 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:47.910 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:47.910 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:47.910 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:10:47.910 04:50:54 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:10:47.910 00:10:47.910 real 1m4.865s 00:10:47.910 user 1m24.263s 00:10:47.910 sys 0m9.470s 00:10:47.910 04:50:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.910 04:50:54 -- common/autotest_common.sh@10 -- # set +x 00:10:47.910 ************************************ 00:10:47.910 END TEST blockdev_nvme_gpt 00:10:47.910 ************************************ 00:10:47.910 04:50:54 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:47.910 04:50:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:47.910 04:50:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.911 04:50:54 -- common/autotest_common.sh@10 -- # set +x 00:10:47.911 ************************************ 00:10:47.911 START TEST nvme 00:10:47.911 ************************************ 00:10:47.911 04:50:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:48.169 * Looking for test storage... 
00:10:48.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:48.169 04:50:55 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:49.106 lsblk: /dev/nvme0c0n1: not a block device 00:10:49.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:49.365 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.365 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.365 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.365 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.365 04:50:56 -- nvme/nvme.sh@79 -- # uname 00:10:49.365 04:50:56 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:49.365 04:50:56 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:49.365 04:50:56 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:49.365 04:50:56 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:49.365 04:50:56 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:10:49.365 04:50:56 -- common/autotest_common.sh@1045 -- # echo 0 00:10:49.365 04:50:56 -- common/autotest_common.sh@1047 -- # stubpid=64542 00:10:49.365 04:50:56 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:49.365 04:50:56 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:10:49.365 Waiting for stub to ready for secondary processes... 00:10:49.365 04:50:56 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:49.365 04:50:56 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64542 ]] 00:10:49.365 04:50:56 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:49.623 [2024-05-12 04:50:56.508491] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:49.623 [2024-05-12 04:50:56.508645] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.189 [2024-05-12 04:50:57.289173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:50.446 04:50:57 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:50.446 04:50:57 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64542 ]] 00:10:50.446 04:50:57 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:50.446 [2024-05-12 04:50:57.504746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.446 [2024-05-12 04:50:57.504861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.446 [2024-05-12 04:50:57.504874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.446 [2024-05-12 04:50:57.527199] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:50.446 [2024-05-12 04:50:57.541531] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:50.446 [2024-05-12 04:50:57.541780] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:50.446 [2024-05-12 04:50:57.554150] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:50.446 [2024-05-12 04:50:57.554390] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:50.446 [2024-05-12 04:50:57.554520] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:50.446 [2024-05-12 04:50:57.564006] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:50.446 [2024-05-12 04:50:57.564235] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:50.446 [2024-05-12 04:50:57.564391] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:50.704 [2024-05-12 04:50:57.574928] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:50.704 [2024-05-12 04:50:57.575140] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:50.704 [2024-05-12 04:50:57.575356] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:50.704 [2024-05-12 04:50:57.575492] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:50.704 [2024-05-12 04:50:57.575663] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:51.636 04:50:58 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:51.636 done. 00:10:51.636 04:50:58 -- common/autotest_common.sh@1054 -- # echo done. 
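The stub handshake traced above condenses to a simple poll: start the stub, then loop on the ready file while confirming the process is still alive. A minimal sketch, assuming the binary path and the -s 4096 -i 0 -m 0xE arguments from this run; the real helpers (start_stub/_start_stub) live in autotest_common.sh:

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!
  # Wait until the stub creates /var/run/spdk_stub0, bailing out if it dies first.
  while [ ! -e /var/run/spdk_stub0 ]; do
      [ -e "/proc/$stubpid" ] || { echo "stub exited before becoming ready" >&2; exit 1; }
      sleep 1s
  done
  echo done.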
00:10:51.636 04:50:58 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:51.636 04:50:58 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:51.636 04:50:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.636 04:50:58 -- common/autotest_common.sh@10 -- # set +x 00:10:51.636 ************************************ 00:10:51.636 START TEST nvme_reset 00:10:51.636 ************************************ 00:10:51.637 04:50:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:51.637 Initializing NVMe Controllers 00:10:51.637 Skipping QEMU NVMe SSD at 0000:00:06.0 00:10:51.637 Skipping QEMU NVMe SSD at 0000:00:07.0 00:10:51.637 Skipping QEMU NVMe SSD at 0000:00:09.0 00:10:51.637 Skipping QEMU NVMe SSD at 0000:00:08.0 00:10:51.637 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:51.895 00:10:51.895 real 0m0.287s 00:10:51.895 user 0m0.111s 00:10:51.895 sys 0m0.131s 00:10:51.895 04:50:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.895 04:50:58 -- common/autotest_common.sh@10 -- # set +x 00:10:51.895 ************************************ 00:10:51.895 END TEST nvme_reset 00:10:51.895 ************************************ 00:10:51.895 04:50:58 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:51.895 04:50:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:51.895 04:50:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.895 04:50:58 -- common/autotest_common.sh@10 -- # set +x 00:10:51.895 ************************************ 00:10:51.895 START TEST nvme_identify 00:10:51.895 ************************************ 00:10:51.895 04:50:58 -- common/autotest_common.sh@1104 -- # nvme_identify 00:10:51.895 04:50:58 -- nvme/nvme.sh@12 -- # bdfs=() 00:10:51.895 04:50:58 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:51.895 04:50:58 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:51.895 04:50:58 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:51.895 04:50:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:51.895 04:50:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:51.895 04:50:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:51.895 04:50:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:51.895 04:50:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:51.895 04:50:58 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:51.895 04:50:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:51.895 04:50:58 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:52.155 [2024-05-12 04:50:59.133547] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64585 terminated unexpected 00:10:52.155 ===================================================== 00:10:52.155 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:52.155 ===================================================== 00:10:52.155 Controller Capabilities/Features 00:10:52.155 ================================ 00:10:52.155 Vendor ID: 1b36 00:10:52.155 Subsystem Vendor ID: 1af4 00:10:52.155 Serial Number: 12340 00:10:52.155 Model Number: QEMU NVMe Ctrl 00:10:52.155 Firmware Version: 8.0.0 00:10:52.155 Recommended Arb 
Burst: 6 00:10:52.155 IEEE OUI Identifier: 00 54 52 00:10:52.155 Multi-path I/O 00:10:52.155 May have multiple subsystem ports: No 00:10:52.155 May have multiple controllers: No 00:10:52.155 Associated with SR-IOV VF: No 00:10:52.155 Max Data Transfer Size: 524288 00:10:52.155 Max Number of Namespaces: 256 00:10:52.155 Max Number of I/O Queues: 64 00:10:52.155 NVMe Specification Version (VS): 1.4 00:10:52.155 NVMe Specification Version (Identify): 1.4 00:10:52.155 Maximum Queue Entries: 2048 00:10:52.155 Contiguous Queues Required: Yes 00:10:52.155 Arbitration Mechanisms Supported 00:10:52.155 Weighted Round Robin: Not Supported 00:10:52.155 Vendor Specific: Not Supported 00:10:52.155 Reset Timeout: 7500 ms 00:10:52.156 Doorbell Stride: 4 bytes 00:10:52.156 NVM Subsystem Reset: Not Supported 00:10:52.156 Command Sets Supported 00:10:52.156 NVM Command Set: Supported 00:10:52.156 Boot Partition: Not Supported 00:10:52.156 Memory Page Size Minimum: 4096 bytes 00:10:52.156 Memory Page Size Maximum: 65536 bytes 00:10:52.156 Persistent Memory Region: Not Supported 00:10:52.156 Optional Asynchronous Events Supported 00:10:52.156 Namespace Attribute Notices: Supported 00:10:52.156 Firmware Activation Notices: Not Supported 00:10:52.156 ANA Change Notices: Not Supported 00:10:52.156 PLE Aggregate Log Change Notices: Not Supported 00:10:52.156 LBA Status Info Alert Notices: Not Supported 00:10:52.156 EGE Aggregate Log Change Notices: Not Supported 00:10:52.156 Normal NVM Subsystem Shutdown event: Not Supported 00:10:52.156 Zone Descriptor Change Notices: Not Supported 00:10:52.156 Discovery Log Change Notices: Not Supported 00:10:52.156 Controller Attributes 00:10:52.156 128-bit Host Identifier: Not Supported 00:10:52.156 Non-Operational Permissive Mode: Not Supported 00:10:52.156 NVM Sets: Not Supported 00:10:52.156 Read Recovery Levels: Not Supported 00:10:52.156 Endurance Groups: Not Supported 00:10:52.156 Predictable Latency Mode: Not Supported 00:10:52.156 Traffic Based Keep ALive: Not Supported 00:10:52.156 Namespace Granularity: Not Supported 00:10:52.156 SQ Associations: Not Supported 00:10:52.156 UUID List: Not Supported 00:10:52.156 Multi-Domain Subsystem: Not Supported 00:10:52.156 Fixed Capacity Management: Not Supported 00:10:52.156 Variable Capacity Management: Not Supported 00:10:52.156 Delete Endurance Group: Not Supported 00:10:52.156 Delete NVM Set: Not Supported 00:10:52.156 Extended LBA Formats Supported: Supported 00:10:52.156 Flexible Data Placement Supported: Not Supported 00:10:52.156 00:10:52.156 Controller Memory Buffer Support 00:10:52.156 ================================ 00:10:52.156 Supported: No 00:10:52.156 00:10:52.156 Persistent Memory Region Support 00:10:52.156 ================================ 00:10:52.156 Supported: No 00:10:52.156 00:10:52.156 Admin Command Set Attributes 00:10:52.156 ============================ 00:10:52.156 Security Send/Receive: Not Supported 00:10:52.156 Format NVM: Supported 00:10:52.156 Firmware Activate/Download: Not Supported 00:10:52.156 Namespace Management: Supported 00:10:52.156 Device Self-Test: Not Supported 00:10:52.156 Directives: Supported 00:10:52.156 NVMe-MI: Not Supported 00:10:52.156 Virtualization Management: Not Supported 00:10:52.156 Doorbell Buffer Config: Supported 00:10:52.156 Get LBA Status Capability: Not Supported 00:10:52.156 Command & Feature Lockdown Capability: Not Supported 00:10:52.156 Abort Command Limit: 4 00:10:52.156 Async Event Request Limit: 4 00:10:52.156 Number of Firmware Slots: N/A 00:10:52.156 
Firmware Slot 1 Read-Only: N/A 00:10:52.156 Firmware Activation Without Reset: N/A 00:10:52.156 Multiple Update Detection Support: N/A 00:10:52.156 Firmware Update Granularity: No Information Provided 00:10:52.156 Per-Namespace SMART Log: Yes 00:10:52.156 Asymmetric Namespace Access Log Page: Not Supported 00:10:52.156 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:52.156 Command Effects Log Page: Supported 00:10:52.156 Get Log Page Extended Data: Supported 00:10:52.156 Telemetry Log Pages: Not Supported 00:10:52.156 Persistent Event Log Pages: Not Supported 00:10:52.156 Supported Log Pages Log Page: May Support 00:10:52.156 Commands Supported & Effects Log Page: Not Supported 00:10:52.156 Feature Identifiers & Effects Log Page:May Support 00:10:52.156 NVMe-MI Commands & Effects Log Page: May Support 00:10:52.156 Data Area 4 for Telemetry Log: Not Supported 00:10:52.156 Error Log Page Entries Supported: 1 00:10:52.156 Keep Alive: Not Supported 00:10:52.156 00:10:52.156 NVM Command Set Attributes 00:10:52.156 ========================== 00:10:52.156 Submission Queue Entry Size 00:10:52.156 Max: 64 00:10:52.156 Min: 64 00:10:52.156 Completion Queue Entry Size 00:10:52.156 Max: 16 00:10:52.156 Min: 16 00:10:52.156 Number of Namespaces: 256 00:10:52.156 Compare Command: Supported 00:10:52.156 Write Uncorrectable Command: Not Supported 00:10:52.156 Dataset Management Command: Supported 00:10:52.156 Write Zeroes Command: Supported 00:10:52.156 Set Features Save Field: Supported 00:10:52.156 Reservations: Not Supported 00:10:52.156 Timestamp: Supported 00:10:52.156 Copy: Supported 00:10:52.156 Volatile Write Cache: Present 00:10:52.156 Atomic Write Unit (Normal): 1 00:10:52.156 Atomic Write Unit (PFail): 1 00:10:52.156 Atomic Compare & Write Unit: 1 00:10:52.156 Fused Compare & Write: Not Supported 00:10:52.156 Scatter-Gather List 00:10:52.156 SGL Command Set: Supported 00:10:52.156 SGL Keyed: Not Supported 00:10:52.156 SGL Bit Bucket Descriptor: Not Supported 00:10:52.156 SGL Metadata Pointer: Not Supported 00:10:52.156 Oversized SGL: Not Supported 00:10:52.156 SGL Metadata Address: Not Supported 00:10:52.156 SGL Offset: Not Supported 00:10:52.156 Transport SGL Data Block: Not Supported 00:10:52.156 Replay Protected Memory Block: Not Supported 00:10:52.156 00:10:52.156 Firmware Slot Information 00:10:52.156 ========================= 00:10:52.156 Active slot: 1 00:10:52.156 Slot 1 Firmware Revision: 1.0 00:10:52.156 00:10:52.156 00:10:52.156 Commands Supported and Effects 00:10:52.156 ============================== 00:10:52.156 Admin Commands 00:10:52.156 -------------- 00:10:52.156 Delete I/O Submission Queue (00h): Supported 00:10:52.156 Create I/O Submission Queue (01h): Supported 00:10:52.156 Get Log Page (02h): Supported 00:10:52.156 Delete I/O Completion Queue (04h): Supported 00:10:52.156 Create I/O Completion Queue (05h): Supported 00:10:52.156 Identify (06h): Supported 00:10:52.156 Abort (08h): Supported 00:10:52.156 Set Features (09h): Supported 00:10:52.156 Get Features (0Ah): Supported 00:10:52.156 Asynchronous Event Request (0Ch): Supported 00:10:52.156 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:52.156 Directive Send (19h): Supported 00:10:52.156 Directive Receive (1Ah): Supported 00:10:52.156 Virtualization Management (1Ch): Supported 00:10:52.156 Doorbell Buffer Config (7Ch): Supported 00:10:52.156 Format NVM (80h): Supported LBA-Change 00:10:52.156 I/O Commands 00:10:52.156 ------------ 00:10:52.156 Flush (00h): Supported LBA-Change 00:10:52.156 Write (01h): 
Supported LBA-Change 00:10:52.156 Read (02h): Supported 00:10:52.156 Compare (05h): Supported 00:10:52.156 Write Zeroes (08h): Supported LBA-Change 00:10:52.156 Dataset Management (09h): Supported LBA-Change 00:10:52.156 Unknown (0Ch): Supported 00:10:52.156 Unknown (12h): Supported 00:10:52.156 Copy (19h): Supported LBA-Change 00:10:52.156 Unknown (1Dh): Supported LBA-Change 00:10:52.156 00:10:52.156 Error Log 00:10:52.156 ========= 00:10:52.156 00:10:52.156 Arbitration 00:10:52.156 =========== 00:10:52.156 Arbitration Burst: no limit 00:10:52.156 00:10:52.156 Power Management 00:10:52.156 ================ 00:10:52.156 Number of Power States: 1 00:10:52.156 Current Power State: Power State #0 00:10:52.156 Power State #0: 00:10:52.156 Max Power: 25.00 W 00:10:52.156 Non-Operational State: Operational 00:10:52.156 Entry Latency: 16 microseconds 00:10:52.156 Exit Latency: 4 microseconds 00:10:52.156 Relative Read Throughput: 0 00:10:52.156 Relative Read Latency: 0 00:10:52.156 Relative Write Throughput: 0 00:10:52.156 Relative Write Latency: 0 [2024-05-12 04:50:59.134744] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64585 terminated unexpected 00:10:52.157 Idle Power: Not Reported 00:10:52.157 Active Power: Not Reported 00:10:52.157 Non-Operational Permissive Mode: Not Supported 00:10:52.157 00:10:52.157 Health Information 00:10:52.157 ================== 00:10:52.157 Critical Warnings: 00:10:52.157 Available Spare Space: OK 00:10:52.157 Temperature: OK 00:10:52.157 Device Reliability: OK 00:10:52.157 Read Only: No 00:10:52.157 Volatile Memory Backup: OK 00:10:52.157 Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.157 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:52.157 Available Spare: 0% 00:10:52.157 Available Spare Threshold: 0% 00:10:52.157 Life Percentage Used: 0% 00:10:52.157 Data Units Read: 1776 00:10:52.157 Data Units Written: 820 00:10:52.157 Host Read Commands: 87079 00:10:52.157 Host Write Commands: 43236 00:10:52.157 Controller Busy Time: 0 minutes 00:10:52.157 Power Cycles: 0 00:10:52.157 Power On Hours: 0 hours 00:10:52.157 Unsafe Shutdowns: 0 00:10:52.157 Unrecoverable Media Errors: 0 00:10:52.157 Lifetime Error Log Entries: 0 00:10:52.157 Warning Temperature Time: 0 minutes 00:10:52.157 Critical Temperature Time: 0 minutes 00:10:52.157 00:10:52.157 Number of Queues 00:10:52.157 ================ 00:10:52.157 Number of I/O Submission Queues: 64 00:10:52.157 Number of I/O Completion Queues: 64 00:10:52.157 00:10:52.157 ZNS Specific Controller Data 00:10:52.157 ============================ 00:10:52.157 Zone Append Size Limit: 0 00:10:52.157 00:10:52.157 00:10:52.157 Active Namespaces 00:10:52.157 ================= 00:10:52.157 Namespace ID:1 00:10:52.157 Error Recovery Timeout: Unlimited 00:10:52.157 Command Set Identifier: NVM (00h) 00:10:52.157 Deallocate: Supported 00:10:52.157 Deallocated/Unwritten Error: Supported 00:10:52.157 Deallocated Read Value: All 0x00 00:10:52.157 Deallocate in Write Zeroes: Not Supported 00:10:52.157 Deallocated Guard Field: 0xFFFF 00:10:52.157 Flush: Supported 00:10:52.157 Reservation: Not Supported 00:10:52.157 Metadata Transferred as: Separate Metadata Buffer 00:10:52.157 Namespace Sharing Capabilities: Private 00:10:52.157 Size (in LBAs): 1548666 (5GiB) 00:10:52.157 Capacity (in LBAs): 1548666 (5GiB) 00:10:52.157 Utilization (in LBAs): 1548666 (5GiB) 00:10:52.157 Thin Provisioning: Not Supported 00:10:52.157 Per-NS Atomic Units: No 00:10:52.157 Maximum Single Source Range Length:
128 00:10:52.157 Maximum Copy Length: 128 00:10:52.157 Maximum Source Range Count: 128 00:10:52.157 NGUID/EUI64 Never Reused: No 00:10:52.157 Namespace Write Protected: No 00:10:52.157 Number of LBA Formats: 8 00:10:52.157 Current LBA Format: LBA Format #07 00:10:52.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.157 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.157 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.157 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.157 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.157 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.157 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.157 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.157 00:10:52.157 ===================================================== 00:10:52.157 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:52.157 ===================================================== 00:10:52.157 Controller Capabilities/Features 00:10:52.157 ================================ 00:10:52.157 Vendor ID: 1b36 00:10:52.157 Subsystem Vendor ID: 1af4 00:10:52.157 Serial Number: 12341 00:10:52.157 Model Number: QEMU NVMe Ctrl 00:10:52.157 Firmware Version: 8.0.0 00:10:52.157 Recommended Arb Burst: 6 00:10:52.157 IEEE OUI Identifier: 00 54 52 00:10:52.157 Multi-path I/O 00:10:52.157 May have multiple subsystem ports: No 00:10:52.157 May have multiple controllers: No 00:10:52.157 Associated with SR-IOV VF: No 00:10:52.157 Max Data Transfer Size: 524288 00:10:52.157 Max Number of Namespaces: 256 00:10:52.157 Max Number of I/O Queues: 64 00:10:52.157 NVMe Specification Version (VS): 1.4 00:10:52.157 NVMe Specification Version (Identify): 1.4 00:10:52.157 Maximum Queue Entries: 2048 00:10:52.157 Contiguous Queues Required: Yes 00:10:52.157 Arbitration Mechanisms Supported 00:10:52.157 Weighted Round Robin: Not Supported 00:10:52.157 Vendor Specific: Not Supported 00:10:52.157 Reset Timeout: 7500 ms 00:10:52.157 Doorbell Stride: 4 bytes 00:10:52.157 NVM Subsystem Reset: Not Supported 00:10:52.157 Command Sets Supported 00:10:52.157 NVM Command Set: Supported 00:10:52.157 Boot Partition: Not Supported 00:10:52.157 Memory Page Size Minimum: 4096 bytes 00:10:52.157 Memory Page Size Maximum: 65536 bytes 00:10:52.157 Persistent Memory Region: Not Supported 00:10:52.157 Optional Asynchronous Events Supported 00:10:52.157 Namespace Attribute Notices: Supported 00:10:52.157 Firmware Activation Notices: Not Supported 00:10:52.157 ANA Change Notices: Not Supported 00:10:52.157 PLE Aggregate Log Change Notices: Not Supported 00:10:52.157 LBA Status Info Alert Notices: Not Supported 00:10:52.157 EGE Aggregate Log Change Notices: Not Supported 00:10:52.157 Normal NVM Subsystem Shutdown event: Not Supported 00:10:52.157 Zone Descriptor Change Notices: Not Supported 00:10:52.157 Discovery Log Change Notices: Not Supported 00:10:52.157 Controller Attributes 00:10:52.157 128-bit Host Identifier: Not Supported 00:10:52.157 Non-Operational Permissive Mode: Not Supported 00:10:52.157 NVM Sets: Not Supported 00:10:52.157 Read Recovery Levels: Not Supported 00:10:52.157 Endurance Groups: Not Supported 00:10:52.157 Predictable Latency Mode: Not Supported 00:10:52.157 Traffic Based Keep ALive: Not Supported 00:10:52.157 Namespace Granularity: Not Supported 00:10:52.157 SQ Associations: Not Supported 00:10:52.157 UUID List: Not Supported 00:10:52.157 Multi-Domain Subsystem: Not Supported 00:10:52.157 Fixed Capacity Management: Not Supported 00:10:52.157 Variable Capacity 
Management: Not Supported 00:10:52.157 Delete Endurance Group: Not Supported 00:10:52.157 Delete NVM Set: Not Supported 00:10:52.157 Extended LBA Formats Supported: Supported 00:10:52.157 Flexible Data Placement Supported: Not Supported 00:10:52.157 00:10:52.157 Controller Memory Buffer Support 00:10:52.157 ================================ 00:10:52.157 Supported: No 00:10:52.157 00:10:52.157 Persistent Memory Region Support 00:10:52.157 ================================ 00:10:52.157 Supported: No 00:10:52.157 00:10:52.157 Admin Command Set Attributes 00:10:52.157 ============================ 00:10:52.157 Security Send/Receive: Not Supported 00:10:52.157 Format NVM: Supported 00:10:52.157 Firmware Activate/Download: Not Supported 00:10:52.157 Namespace Management: Supported 00:10:52.157 Device Self-Test: Not Supported 00:10:52.157 Directives: Supported 00:10:52.157 NVMe-MI: Not Supported 00:10:52.157 Virtualization Management: Not Supported 00:10:52.157 Doorbell Buffer Config: Supported 00:10:52.157 Get LBA Status Capability: Not Supported 00:10:52.157 Command & Feature Lockdown Capability: Not Supported 00:10:52.157 Abort Command Limit: 4 00:10:52.157 Async Event Request Limit: 4 00:10:52.157 Number of Firmware Slots: N/A 00:10:52.157 Firmware Slot 1 Read-Only: N/A 00:10:52.157 Firmware Activation Without Reset: N/A 00:10:52.157 Multiple Update Detection Support: N/A 00:10:52.157 Firmware Update Granularity: No Information Provided 00:10:52.157 Per-Namespace SMART Log: Yes 00:10:52.157 Asymmetric Namespace Access Log Page: Not Supported 00:10:52.157 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:52.157 Command Effects Log Page: Supported 00:10:52.157 Get Log Page Extended Data: Supported 00:10:52.157 Telemetry Log Pages: Not Supported 00:10:52.157 Persistent Event Log Pages: Not Supported 00:10:52.157 Supported Log Pages Log Page: May Support 00:10:52.157 Commands Supported & Effects Log Page: Not Supported 00:10:52.157 Feature Identifiers & Effects Log Page:May Support 00:10:52.157 NVMe-MI Commands & Effects Log Page: May Support 00:10:52.157 Data Area 4 for Telemetry Log: Not Supported 00:10:52.157 Error Log Page Entries Supported: 1 00:10:52.157 Keep Alive: Not Supported 00:10:52.157 00:10:52.157 NVM Command Set Attributes 00:10:52.157 ========================== 00:10:52.157 Submission Queue Entry Size 00:10:52.157 Max: 64 00:10:52.157 Min: 64 00:10:52.158 Completion Queue Entry Size 00:10:52.158 Max: 16 00:10:52.158 Min: 16 00:10:52.158 Number of Namespaces: 256 00:10:52.158 Compare Command: Supported 00:10:52.158 Write Uncorrectable Command: Not Supported 00:10:52.158 Dataset Management Command: Supported 00:10:52.158 Write Zeroes Command: Supported 00:10:52.158 Set Features Save Field: Supported 00:10:52.158 Reservations: Not Supported 00:10:52.158 Timestamp: Supported 00:10:52.158 Copy: Supported 00:10:52.158 Volatile Write Cache: Present 00:10:52.158 Atomic Write Unit (Normal): 1 00:10:52.158 Atomic Write Unit (PFail): 1 00:10:52.158 Atomic Compare & Write Unit: 1 00:10:52.158 Fused Compare & Write: Not Supported 00:10:52.158 Scatter-Gather List 00:10:52.158 SGL Command Set: Supported 00:10:52.158 SGL Keyed: Not Supported 00:10:52.158 SGL Bit Bucket Descriptor: Not Supported 00:10:52.158 SGL Metadata Pointer: Not Supported 00:10:52.158 Oversized SGL: Not Supported 00:10:52.158 SGL Metadata Address: Not Supported 00:10:52.158 SGL Offset: Not Supported 00:10:52.158 Transport SGL Data Block: Not Supported 00:10:52.158 Replay Protected Memory Block: Not Supported 00:10:52.158 
00:10:52.158 Firmware Slot Information 00:10:52.158 ========================= 00:10:52.158 Active slot: 1 00:10:52.158 Slot 1 Firmware Revision: 1.0 00:10:52.158 00:10:52.158 00:10:52.158 Commands Supported and Effects 00:10:52.158 ============================== 00:10:52.158 Admin Commands 00:10:52.158 -------------- 00:10:52.158 Delete I/O Submission Queue (00h): Supported 00:10:52.158 Create I/O Submission Queue (01h): Supported 00:10:52.158 Get Log Page (02h): Supported 00:10:52.158 Delete I/O Completion Queue (04h): Supported 00:10:52.158 Create I/O Completion Queue (05h): Supported 00:10:52.158 Identify (06h): Supported 00:10:52.158 Abort (08h): Supported 00:10:52.158 Set Features (09h): Supported 00:10:52.158 Get Features (0Ah): Supported 00:10:52.158 Asynchronous Event Request (0Ch): Supported 00:10:52.158 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:52.158 Directive Send (19h): Supported 00:10:52.158 Directive Receive (1Ah): Supported 00:10:52.158 Virtualization Management (1Ch): Supported 00:10:52.158 Doorbell Buffer Config (7Ch): Supported 00:10:52.158 Format NVM (80h): Supported LBA-Change 00:10:52.158 I/O Commands 00:10:52.158 ------------ 00:10:52.158 Flush (00h): Supported LBA-Change 00:10:52.158 Write (01h): Supported LBA-Change 00:10:52.158 Read (02h): Supported 00:10:52.158 Compare (05h): Supported 00:10:52.158 Write Zeroes (08h): Supported LBA-Change 00:10:52.158 Dataset Management (09h): Supported LBA-Change 00:10:52.158 Unknown (0Ch): Supported 00:10:52.158 Unknown (12h): Supported 00:10:52.158 Copy (19h): Supported LBA-Change 00:10:52.158 Unknown (1Dh): Supported LBA-Change 00:10:52.158 00:10:52.158 Error Log 00:10:52.158 ========= 00:10:52.158 00:10:52.158 Arbitration 00:10:52.158 =========== 00:10:52.158 Arbitration Burst: no limit 00:10:52.158 00:10:52.158 Power Management 00:10:52.158 ================ 00:10:52.158 Number of Power States: 1 00:10:52.158 Current Power State: Power State #0 00:10:52.158 Power State #0: 00:10:52.158 Max Power: 25.00 W 00:10:52.158 Non-Operational State: Operational 00:10:52.158 Entry Latency: 16 microseconds 00:10:52.158 Exit Latency: 4 microseconds 00:10:52.158 Relative Read Throughput: 0 00:10:52.158 Relative Read Latency: 0 00:10:52.158 Relative Write Throughput: 0 00:10:52.158 Relative Write Latency: 0 00:10:52.158 Idle Power: Not Reported 00:10:52.158 Active Power: Not Reported 00:10:52.158 Non-Operational Permissive Mode: Not Supported 00:10:52.158 00:10:52.158 Health Information 00:10:52.158 ================== 00:10:52.158 Critical Warnings: 00:10:52.158 Available Spare Space: OK 00:10:52.158 Temperature: OK 00:10:52.158 Device Reliability: OK 00:10:52.158 Read Only: No 00:10:52.158 Volatile Memory Backup: OK 00:10:52.158 Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.158 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:52.158 Available Spare: 0% 00:10:52.158 Available Spare Threshold: 0% 00:10:52.158 Life Percentage Used: 0% 00:10:52.158 Data Units Read: 1223 00:10:52.158 Data Units Written: 568 00:10:52.158 Host Read Commands: 60362 00:10:52.158 Host Write Commands: 29698 00:10:52.158 Controller Busy Time: 0 minutes 00:10:52.158 Power Cycles: 0 00:10:52.158 Power On Hours: 0 hours 00:10:52.158 Unsafe Shutdowns: 0 00:10:52.158 Unrecoverable Media Errors: 0 00:10:52.158 Lifetime Error Log Entries: 0 00:10:52.158 Warning Temperature Time: 0 minutes 00:10:52.158 Critical Temperature Time: 0 minutes 00:10:52.158 00:10:52.158 Number of Queues 00:10:52.158 ================ 00:10:52.158 Number of I/O 
Submission Queues: 64 00:10:52.158 Number of I/O Completion Queues: 64 00:10:52.158 00:10:52.158 ZNS Specific Controller Data 00:10:52.158 ============================ 00:10:52.158 Zone Append Size Limit: 0 00:10:52.158 00:10:52.158 00:10:52.158 Active Namespaces 00:10:52.158 ================= 00:10:52.158 Namespace ID:1 00:10:52.158 Error Recovery Timeout: Unlimited [2024-05-12 04:50:59.135899] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64585 terminated unexpected 00:10:52.158 Command Set Identifier: NVM (00h) 00:10:52.158 Deallocate: Supported 00:10:52.158 Deallocated/Unwritten Error: Supported 00:10:52.158 Deallocated Read Value: All 0x00 00:10:52.158 Deallocate in Write Zeroes: Not Supported 00:10:52.158 Deallocated Guard Field: 0xFFFF 00:10:52.158 Flush: Supported 00:10:52.158 Reservation: Not Supported 00:10:52.158 Namespace Sharing Capabilities: Private 00:10:52.158 Size (in LBAs): 1310720 (5GiB) 00:10:52.158 Capacity (in LBAs): 1310720 (5GiB) 00:10:52.158 Utilization (in LBAs): 1310720 (5GiB) 00:10:52.158 Thin Provisioning: Not Supported 00:10:52.158 Per-NS Atomic Units: No 00:10:52.158 Maximum Single Source Range Length: 128 00:10:52.158 Maximum Copy Length: 128 00:10:52.158 Maximum Source Range Count: 128 00:10:52.158 NGUID/EUI64 Never Reused: No 00:10:52.158 Namespace Write Protected: No 00:10:52.158 Number of LBA Formats: 8 00:10:52.158 Current LBA Format: LBA Format #04 00:10:52.158 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.158 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.158 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.158 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.158 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.158 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.158 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.158 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.158 00:10:52.158 ===================================================== 00:10:52.158 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:52.158 ===================================================== 00:10:52.158 Controller Capabilities/Features 00:10:52.158 ================================ 00:10:52.158 Vendor ID: 1b36 00:10:52.158 Subsystem Vendor ID: 1af4 00:10:52.158 Serial Number: 12343 00:10:52.158 Model Number: QEMU NVMe Ctrl 00:10:52.158 Firmware Version: 8.0.0 00:10:52.158 Recommended Arb Burst: 6 00:10:52.158 IEEE OUI Identifier: 00 54 52 00:10:52.158 Multi-path I/O 00:10:52.158 May have multiple subsystem ports: No 00:10:52.158 May have multiple controllers: Yes 00:10:52.158 Associated with SR-IOV VF: No 00:10:52.158 Max Data Transfer Size: 524288 00:10:52.158 Max Number of Namespaces: 256 00:10:52.158 Max Number of I/O Queues: 64 00:10:52.158 NVMe Specification Version (VS): 1.4 00:10:52.158 NVMe Specification Version (Identify): 1.4 00:10:52.158 Maximum Queue Entries: 2048 00:10:52.158 Contiguous Queues Required: Yes 00:10:52.158 Arbitration Mechanisms Supported 00:10:52.158 Weighted Round Robin: Not Supported 00:10:52.158 Vendor Specific: Not Supported 00:10:52.158 Reset Timeout: 7500 ms 00:10:52.158 Doorbell Stride: 4 bytes 00:10:52.158 NVM Subsystem Reset: Not Supported 00:10:52.158 Command Sets Supported 00:10:52.158 NVM Command Set: Supported 00:10:52.159 Boot Partition: Not Supported 00:10:52.159 Memory Page Size Minimum: 4096 bytes 00:10:52.159 Memory Page Size Maximum: 65536 bytes 00:10:52.159 Persistent Memory Region: Not Supported 00:10:52.159
Optional Asynchronous Events Supported 00:10:52.159 Namespace Attribute Notices: Supported 00:10:52.159 Firmware Activation Notices: Not Supported 00:10:52.159 ANA Change Notices: Not Supported 00:10:52.159 PLE Aggregate Log Change Notices: Not Supported 00:10:52.159 LBA Status Info Alert Notices: Not Supported 00:10:52.159 EGE Aggregate Log Change Notices: Not Supported 00:10:52.159 Normal NVM Subsystem Shutdown event: Not Supported 00:10:52.159 Zone Descriptor Change Notices: Not Supported 00:10:52.159 Discovery Log Change Notices: Not Supported 00:10:52.159 Controller Attributes 00:10:52.159 128-bit Host Identifier: Not Supported 00:10:52.159 Non-Operational Permissive Mode: Not Supported 00:10:52.159 NVM Sets: Not Supported 00:10:52.159 Read Recovery Levels: Not Supported 00:10:52.159 Endurance Groups: Supported 00:10:52.159 Predictable Latency Mode: Not Supported 00:10:52.159 Traffic Based Keep ALive: Not Supported 00:10:52.159 Namespace Granularity: Not Supported 00:10:52.159 SQ Associations: Not Supported 00:10:52.159 UUID List: Not Supported 00:10:52.159 Multi-Domain Subsystem: Not Supported 00:10:52.159 Fixed Capacity Management: Not Supported 00:10:52.159 Variable Capacity Management: Not Supported 00:10:52.159 Delete Endurance Group: Not Supported 00:10:52.159 Delete NVM Set: Not Supported 00:10:52.159 Extended LBA Formats Supported: Supported 00:10:52.159 Flexible Data Placement Supported: Supported 00:10:52.159 00:10:52.159 Controller Memory Buffer Support 00:10:52.159 ================================ 00:10:52.159 Supported: No 00:10:52.159 00:10:52.159 Persistent Memory Region Support 00:10:52.159 ================================ 00:10:52.159 Supported: No 00:10:52.159 00:10:52.159 Admin Command Set Attributes 00:10:52.159 ============================ 00:10:52.159 Security Send/Receive: Not Supported 00:10:52.159 Format NVM: Supported 00:10:52.159 Firmware Activate/Download: Not Supported 00:10:52.159 Namespace Management: Supported 00:10:52.159 Device Self-Test: Not Supported 00:10:52.159 Directives: Supported 00:10:52.159 NVMe-MI: Not Supported 00:10:52.159 Virtualization Management: Not Supported 00:10:52.159 Doorbell Buffer Config: Supported 00:10:52.159 Get LBA Status Capability: Not Supported 00:10:52.159 Command & Feature Lockdown Capability: Not Supported 00:10:52.159 Abort Command Limit: 4 00:10:52.159 Async Event Request Limit: 4 00:10:52.159 Number of Firmware Slots: N/A 00:10:52.159 Firmware Slot 1 Read-Only: N/A 00:10:52.159 Firmware Activation Without Reset: N/A 00:10:52.159 Multiple Update Detection Support: N/A 00:10:52.159 Firmware Update Granularity: No Information Provided 00:10:52.159 Per-Namespace SMART Log: Yes 00:10:52.159 Asymmetric Namespace Access Log Page: Not Supported 00:10:52.159 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:52.159 Command Effects Log Page: Supported 00:10:52.159 Get Log Page Extended Data: Supported 00:10:52.159 Telemetry Log Pages: Not Supported 00:10:52.159 Persistent Event Log Pages: Not Supported 00:10:52.159 Supported Log Pages Log Page: May Support 00:10:52.159 Commands Supported & Effects Log Page: Not Supported 00:10:52.159 Feature Identifiers & Effects Log Page:May Support 00:10:52.159 NVMe-MI Commands & Effects Log Page: May Support 00:10:52.159 Data Area 4 for Telemetry Log: Not Supported 00:10:52.159 Error Log Page Entries Supported: 1 00:10:52.159 Keep Alive: Not Supported 00:10:52.159 00:10:52.159 NVM Command Set Attributes 00:10:52.159 ========================== 00:10:52.159 Submission Queue Entry Size 
00:10:52.159 Max: 64 00:10:52.159 Min: 64 00:10:52.159 Completion Queue Entry Size 00:10:52.159 Max: 16 00:10:52.159 Min: 16 00:10:52.159 Number of Namespaces: 256 00:10:52.159 Compare Command: Supported 00:10:52.159 Write Uncorrectable Command: Not Supported 00:10:52.159 Dataset Management Command: Supported 00:10:52.159 Write Zeroes Command: Supported 00:10:52.159 Set Features Save Field: Supported 00:10:52.159 Reservations: Not Supported 00:10:52.159 Timestamp: Supported 00:10:52.159 Copy: Supported 00:10:52.159 Volatile Write Cache: Present 00:10:52.159 Atomic Write Unit (Normal): 1 00:10:52.159 Atomic Write Unit (PFail): 1 00:10:52.159 Atomic Compare & Write Unit: 1 00:10:52.159 Fused Compare & Write: Not Supported 00:10:52.159 Scatter-Gather List 00:10:52.159 SGL Command Set: Supported 00:10:52.159 SGL Keyed: Not Supported 00:10:52.159 SGL Bit Bucket Descriptor: Not Supported 00:10:52.159 SGL Metadata Pointer: Not Supported 00:10:52.159 Oversized SGL: Not Supported 00:10:52.159 SGL Metadata Address: Not Supported 00:10:52.159 SGL Offset: Not Supported 00:10:52.159 Transport SGL Data Block: Not Supported 00:10:52.159 Replay Protected Memory Block: Not Supported 00:10:52.159 00:10:52.159 Firmware Slot Information 00:10:52.159 ========================= 00:10:52.159 Active slot: 1 00:10:52.159 Slot 1 Firmware Revision: 1.0 00:10:52.159 00:10:52.159 00:10:52.159 Commands Supported and Effects 00:10:52.159 ============================== 00:10:52.159 Admin Commands 00:10:52.159 -------------- 00:10:52.159 Delete I/O Submission Queue (00h): Supported 00:10:52.159 Create I/O Submission Queue (01h): Supported 00:10:52.159 Get Log Page (02h): Supported 00:10:52.159 Delete I/O Completion Queue (04h): Supported 00:10:52.159 Create I/O Completion Queue (05h): Supported 00:10:52.159 Identify (06h): Supported 00:10:52.159 Abort (08h): Supported 00:10:52.159 Set Features (09h): Supported 00:10:52.159 Get Features (0Ah): Supported 00:10:52.159 Asynchronous Event Request (0Ch): Supported 00:10:52.159 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:52.159 Directive Send (19h): Supported 00:10:52.159 Directive Receive (1Ah): Supported 00:10:52.159 Virtualization Management (1Ch): Supported 00:10:52.159 Doorbell Buffer Config (7Ch): Supported 00:10:52.159 Format NVM (80h): Supported LBA-Change 00:10:52.159 I/O Commands 00:10:52.159 ------------ 00:10:52.159 Flush (00h): Supported LBA-Change 00:10:52.159 Write (01h): Supported LBA-Change 00:10:52.159 Read (02h): Supported 00:10:52.159 Compare (05h): Supported 00:10:52.159 Write Zeroes (08h): Supported LBA-Change 00:10:52.159 Dataset Management (09h): Supported LBA-Change 00:10:52.159 Unknown (0Ch): Supported 00:10:52.159 Unknown (12h): Supported 00:10:52.159 Copy (19h): Supported LBA-Change 00:10:52.159 Unknown (1Dh): Supported LBA-Change 00:10:52.159 00:10:52.159 Error Log 00:10:52.159 ========= 00:10:52.159 00:10:52.159 Arbitration 00:10:52.159 =========== 00:10:52.159 Arbitration Burst: no limit 00:10:52.159 00:10:52.159 Power Management 00:10:52.159 ================ 00:10:52.159 Number of Power States: 1 00:10:52.159 Current Power State: Power State #0 00:10:52.159 Power State #0: 00:10:52.159 Max Power: 25.00 W 00:10:52.159 Non-Operational State: Operational 00:10:52.159 Entry Latency: 16 microseconds 00:10:52.159 Exit Latency: 4 microseconds 00:10:52.159 Relative Read Throughput: 0 00:10:52.159 Relative Read Latency: 0 00:10:52.159 Relative Write Throughput: 0 00:10:52.159 Relative Write Latency: 0 00:10:52.159 Idle Power: Not 
Reported 00:10:52.159 Active Power: Not Reported 00:10:52.159 Non-Operational Permissive Mode: Not Supported 00:10:52.159 00:10:52.159 Health Information 00:10:52.159 ================== 00:10:52.159 Critical Warnings: 00:10:52.159 Available Spare Space: OK 00:10:52.159 Temperature: OK 00:10:52.159 Device Reliability: OK 00:10:52.159 Read Only: No 00:10:52.159 Volatile Memory Backup: OK 00:10:52.159 Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.159 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:52.159 Available Spare: 0% 00:10:52.159 Available Spare Threshold: 0% 00:10:52.159 Life Percentage Used: 0% 00:10:52.159 Data Units Read: 1257 00:10:52.159 Data Units Written: 598 00:10:52.159 Host Read Commands: 60231 00:10:52.159 Host Write Commands: 30018 00:10:52.159 Controller Busy Time: 0 minutes 00:10:52.159 Power Cycles: 0 00:10:52.159 Power On Hours: 0 hours 00:10:52.159 Unsafe Shutdowns: 0 00:10:52.159 Unrecoverable Media Errors: 0 00:10:52.159 Lifetime Error Log Entries: 0 00:10:52.159 Warning Temperature Time: 0 minutes 00:10:52.159 Critical Temperature Time: 0 minutes 00:10:52.159 00:10:52.159 Number of Queues 00:10:52.159 ================ 00:10:52.159 Number of I/O Submission Queues: 64 00:10:52.159 Number of I/O Completion Queues: 64 00:10:52.159 00:10:52.159 ZNS Specific Controller Data 00:10:52.159 ============================ 00:10:52.160 Zone Append Size Limit: 0 00:10:52.160 00:10:52.160 00:10:52.160 Active Namespaces 00:10:52.160 ================= 00:10:52.160 Namespace ID:1 00:10:52.160 Error Recovery Timeout: Unlimited 00:10:52.160 Command Set Identifier: NVM (00h) 00:10:52.160 Deallocate: Supported 00:10:52.160 Deallocated/Unwritten Error: Supported 00:10:52.160 Deallocated Read Value: All 0x00 00:10:52.160 Deallocate in Write Zeroes: Not Supported 00:10:52.160 Deallocated Guard Field: 0xFFFF 00:10:52.160 Flush: Supported 00:10:52.160 Reservation: Not Supported 00:10:52.160 Namespace Sharing Capabilities: Multiple Controllers 00:10:52.160 Size (in LBAs): 262144 (1GiB) 00:10:52.160 Capacity (in LBAs): 262144 (1GiB) 00:10:52.160 Utilization (in LBAs): 262144 (1GiB) 00:10:52.160 Thin Provisioning: Not Supported 00:10:52.160 Per-NS Atomic Units: No 00:10:52.160 Maximum Single Source Range Length: 128 00:10:52.160 Maximum Copy Length: 128 00:10:52.160 Maximum Source Range Count: 128 00:10:52.160 NGUID/EUI64 Never Reused: No 00:10:52.160 Namespace Write Protected: No 00:10:52.160 Endurance group ID: 1 00:10:52.160 Number of LBA Formats: 8 00:10:52.160 Current LBA Format: LBA Format #04 00:10:52.160 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.160 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.160 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.160 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.160 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.160 LBA Format #05: Data Size: 4096 Metadata Size: 8 [2024-05-12 04:50:59.137468] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64585 terminated unexpected 00:10:52.160 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.160 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.160 00:10:52.160 Get Feature FDP: 00:10:52.160 ================ 00:10:52.160 Enabled: Yes 00:10:52.160 FDP configuration index: 0 00:10:52.160 00:10:52.160 FDP configurations log page 00:10:52.160 =========================== 00:10:52.160 Number of FDP configurations: 1 00:10:52.160 Version: 0 00:10:52.160 Size: 112 00:10:52.160 FDP
Configuration Descriptor: 0 00:10:52.160 Descriptor Size: 96 00:10:52.160 Reclaim Group Identifier format: 2 00:10:52.160 FDP Volatile Write Cache: Not Present 00:10:52.160 FDP Configuration: Valid 00:10:52.160 Vendor Specific Size: 0 00:10:52.160 Number of Reclaim Groups: 2 00:10:52.160 Number of Reclaim Unit Handles: 8 00:10:52.160 Max Placement Identifiers: 128 00:10:52.160 Number of Namespaces Supported: 256 00:10:52.160 Reclaim Unit Nominal Size: 6000000 bytes 00:10:52.160 Estimated Reclaim Unit Time Limit: Not Reported 00:10:52.160 RUH Desc #000: RUH Type: Initially Isolated 00:10:52.160 RUH Desc #001: RUH Type: Initially Isolated 00:10:52.160 RUH Desc #002: RUH Type: Initially Isolated 00:10:52.160 RUH Desc #003: RUH Type: Initially Isolated 00:10:52.160 RUH Desc #004: RUH Type: Initially Isolated 00:10:52.160 RUH Desc #005: RUH Type: Initially Isolated 00:10:52.160 RUH Desc #006: RUH Type: Initially Isolated 00:10:52.160 RUH Desc #007: RUH Type: Initially Isolated 00:10:52.160 00:10:52.160 FDP reclaim unit handle usage log page 00:10:52.160 ====================================== 00:10:52.160 Number of Reclaim Unit Handles: 8 00:10:52.160 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:52.160 RUH Usage Desc #001: RUH Attributes: Unused 00:10:52.160 RUH Usage Desc #002: RUH Attributes: Unused 00:10:52.160 RUH Usage Desc #003: RUH Attributes: Unused 00:10:52.160 RUH Usage Desc #004: RUH Attributes: Unused 00:10:52.160 RUH Usage Desc #005: RUH Attributes: Unused 00:10:52.160 RUH Usage Desc #006: RUH Attributes: Unused 00:10:52.160 RUH Usage Desc #007: RUH Attributes: Unused 00:10:52.160 00:10:52.160 FDP statistics log page 00:10:52.160 ======================= 00:10:52.160 Host bytes with metadata written: 389713920 00:10:52.160 Media bytes with metadata written: 389758976 00:10:52.160 Media bytes erased: 0 00:10:52.160 00:10:52.160 FDP events log page 00:10:52.160 =================== 00:10:52.160 Number of FDP events: 0 00:10:52.160 00:10:52.160 ===================================================== 00:10:52.160 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:52.160 ===================================================== 00:10:52.160 Controller Capabilities/Features 00:10:52.160 ================================ 00:10:52.160 Vendor ID: 1b36 00:10:52.160 Subsystem Vendor ID: 1af4 00:10:52.160 Serial Number: 12342 00:10:52.160 Model Number: QEMU NVMe Ctrl 00:10:52.160 Firmware Version: 8.0.0 00:10:52.160 Recommended Arb Burst: 6 00:10:52.160 IEEE OUI Identifier: 00 54 52 00:10:52.160 Multi-path I/O 00:10:52.160 May have multiple subsystem ports: No 00:10:52.160 May have multiple controllers: No 00:10:52.160 Associated with SR-IOV VF: No 00:10:52.160 Max Data Transfer Size: 524288 00:10:52.160 Max Number of Namespaces: 256 00:10:52.160 Max Number of I/O Queues: 64 00:10:52.160 NVMe Specification Version (VS): 1.4 00:10:52.160 NVMe Specification Version (Identify): 1.4 00:10:52.160 Maximum Queue Entries: 2048 00:10:52.160 Contiguous Queues Required: Yes 00:10:52.160 Arbitration Mechanisms Supported 00:10:52.160 Weighted Round Robin: Not Supported 00:10:52.160 Vendor Specific: Not Supported 00:10:52.160 Reset Timeout: 7500 ms 00:10:52.160 Doorbell Stride: 4 bytes 00:10:52.160 NVM Subsystem Reset: Not Supported 00:10:52.160 Command Sets Supported 00:10:52.160 NVM Command Set: Supported 00:10:52.160 Boot Partition: Not Supported 00:10:52.160 Memory Page Size Minimum: 4096 bytes 00:10:52.160 Memory Page Size Maximum: 65536 bytes 00:10:52.160 Persistent Memory Region: Not
Supported 00:10:52.160 Optional Asynchronous Events Supported 00:10:52.160 Namespace Attribute Notices: Supported 00:10:52.160 Firmware Activation Notices: Not Supported 00:10:52.160 ANA Change Notices: Not Supported 00:10:52.160 PLE Aggregate Log Change Notices: Not Supported 00:10:52.160 LBA Status Info Alert Notices: Not Supported 00:10:52.160 EGE Aggregate Log Change Notices: Not Supported 00:10:52.160 Normal NVM Subsystem Shutdown event: Not Supported 00:10:52.160 Zone Descriptor Change Notices: Not Supported 00:10:52.160 Discovery Log Change Notices: Not Supported 00:10:52.160 Controller Attributes 00:10:52.160 128-bit Host Identifier: Not Supported 00:10:52.160 Non-Operational Permissive Mode: Not Supported 00:10:52.160 NVM Sets: Not Supported 00:10:52.160 Read Recovery Levels: Not Supported 00:10:52.160 Endurance Groups: Not Supported 00:10:52.160 Predictable Latency Mode: Not Supported 00:10:52.160 Traffic Based Keep ALive: Not Supported 00:10:52.160 Namespace Granularity: Not Supported 00:10:52.160 SQ Associations: Not Supported 00:10:52.160 UUID List: Not Supported 00:10:52.160 Multi-Domain Subsystem: Not Supported 00:10:52.160 Fixed Capacity Management: Not Supported 00:10:52.160 Variable Capacity Management: Not Supported 00:10:52.160 Delete Endurance Group: Not Supported 00:10:52.160 Delete NVM Set: Not Supported 00:10:52.160 Extended LBA Formats Supported: Supported 00:10:52.160 Flexible Data Placement Supported: Not Supported 00:10:52.160 00:10:52.160 Controller Memory Buffer Support 00:10:52.160 ================================ 00:10:52.160 Supported: No 00:10:52.160 00:10:52.160 Persistent Memory Region Support 00:10:52.160 ================================ 00:10:52.160 Supported: No 00:10:52.160 00:10:52.160 Admin Command Set Attributes 00:10:52.160 ============================ 00:10:52.160 Security Send/Receive: Not Supported 00:10:52.160 Format NVM: Supported 00:10:52.160 Firmware Activate/Download: Not Supported 00:10:52.160 Namespace Management: Supported 00:10:52.160 Device Self-Test: Not Supported 00:10:52.160 Directives: Supported 00:10:52.160 NVMe-MI: Not Supported 00:10:52.160 Virtualization Management: Not Supported 00:10:52.160 Doorbell Buffer Config: Supported 00:10:52.160 Get LBA Status Capability: Not Supported 00:10:52.160 Command & Feature Lockdown Capability: Not Supported 00:10:52.160 Abort Command Limit: 4 00:10:52.160 Async Event Request Limit: 4 00:10:52.160 Number of Firmware Slots: N/A 00:10:52.160 Firmware Slot 1 Read-Only: N/A 00:10:52.161 Firmware Activation Without Reset: N/A 00:10:52.161 Multiple Update Detection Support: N/A 00:10:52.161 Firmware Update Granularity: No Information Provided 00:10:52.161 Per-Namespace SMART Log: Yes 00:10:52.161 Asymmetric Namespace Access Log Page: Not Supported 00:10:52.161 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:52.161 Command Effects Log Page: Supported 00:10:52.161 Get Log Page Extended Data: Supported 00:10:52.161 Telemetry Log Pages: Not Supported 00:10:52.161 Persistent Event Log Pages: Not Supported 00:10:52.161 Supported Log Pages Log Page: May Support 00:10:52.161 Commands Supported & Effects Log Page: Not Supported 00:10:52.161 Feature Identifiers & Effects Log Page:May Support 00:10:52.161 NVMe-MI Commands & Effects Log Page: May Support 00:10:52.161 Data Area 4 for Telemetry Log: Not Supported 00:10:52.161 Error Log Page Entries Supported: 1 00:10:52.161 Keep Alive: Not Supported 00:10:52.161 00:10:52.161 NVM Command Set Attributes 00:10:52.161 ========================== 00:10:52.161 
Submission Queue Entry Size 00:10:52.161 Max: 64 00:10:52.161 Min: 64 00:10:52.161 Completion Queue Entry Size 00:10:52.161 Max: 16 00:10:52.161 Min: 16 00:10:52.161 Number of Namespaces: 256 00:10:52.161 Compare Command: Supported 00:10:52.161 Write Uncorrectable Command: Not Supported 00:10:52.161 Dataset Management Command: Supported 00:10:52.161 Write Zeroes Command: Supported 00:10:52.161 Set Features Save Field: Supported 00:10:52.161 Reservations: Not Supported 00:10:52.161 Timestamp: Supported 00:10:52.161 Copy: Supported 00:10:52.161 Volatile Write Cache: Present 00:10:52.161 Atomic Write Unit (Normal): 1 00:10:52.161 Atomic Write Unit (PFail): 1 00:10:52.161 Atomic Compare & Write Unit: 1 00:10:52.161 Fused Compare & Write: Not Supported 00:10:52.161 Scatter-Gather List 00:10:52.161 SGL Command Set: Supported 00:10:52.161 SGL Keyed: Not Supported 00:10:52.161 SGL Bit Bucket Descriptor: Not Supported 00:10:52.161 SGL Metadata Pointer: Not Supported 00:10:52.161 Oversized SGL: Not Supported 00:10:52.161 SGL Metadata Address: Not Supported 00:10:52.161 SGL Offset: Not Supported 00:10:52.161 Transport SGL Data Block: Not Supported 00:10:52.161 Replay Protected Memory Block: Not Supported 00:10:52.161 00:10:52.161 Firmware Slot Information 00:10:52.161 ========================= 00:10:52.161 Active slot: 1 00:10:52.161 Slot 1 Firmware Revision: 1.0 00:10:52.161 00:10:52.161 00:10:52.161 Commands Supported and Effects 00:10:52.161 ============================== 00:10:52.161 Admin Commands 00:10:52.161 -------------- 00:10:52.161 Delete I/O Submission Queue (00h): Supported 00:10:52.161 Create I/O Submission Queue (01h): Supported 00:10:52.161 Get Log Page (02h): Supported 00:10:52.161 Delete I/O Completion Queue (04h): Supported 00:10:52.161 Create I/O Completion Queue (05h): Supported 00:10:52.161 Identify (06h): Supported 00:10:52.161 Abort (08h): Supported 00:10:52.161 Set Features (09h): Supported 00:10:52.161 Get Features (0Ah): Supported 00:10:52.161 Asynchronous Event Request (0Ch): Supported 00:10:52.161 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:52.161 Directive Send (19h): Supported 00:10:52.161 Directive Receive (1Ah): Supported 00:10:52.161 Virtualization Management (1Ch): Supported 00:10:52.161 Doorbell Buffer Config (7Ch): Supported 00:10:52.161 Format NVM (80h): Supported LBA-Change 00:10:52.161 I/O Commands 00:10:52.161 ------------ 00:10:52.161 Flush (00h): Supported LBA-Change 00:10:52.161 Write (01h): Supported LBA-Change 00:10:52.161 Read (02h): Supported 00:10:52.161 Compare (05h): Supported 00:10:52.161 Write Zeroes (08h): Supported LBA-Change 00:10:52.161 Dataset Management (09h): Supported LBA-Change 00:10:52.161 Unknown (0Ch): Supported 00:10:52.161 Unknown (12h): Supported 00:10:52.161 Copy (19h): Supported LBA-Change 00:10:52.161 Unknown (1Dh): Supported LBA-Change 00:10:52.161 00:10:52.161 Error Log 00:10:52.161 ========= 00:10:52.161 00:10:52.161 Arbitration 00:10:52.161 =========== 00:10:52.161 Arbitration Burst: no limit 00:10:52.161 00:10:52.161 Power Management 00:10:52.161 ================ 00:10:52.161 Number of Power States: 1 00:10:52.161 Current Power State: Power State #0 00:10:52.161 Power State #0: 00:10:52.161 Max Power: 25.00 W 00:10:52.161 Non-Operational State: Operational 00:10:52.161 Entry Latency: 16 microseconds 00:10:52.161 Exit Latency: 4 microseconds 00:10:52.161 Relative Read Throughput: 0 00:10:52.161 Relative Read Latency: 0 00:10:52.161 Relative Write Throughput: 0 00:10:52.161 Relative Write Latency: 0 
00:10:52.161 Idle Power: Not Reported 00:10:52.161 Active Power: Not Reported 00:10:52.161 Non-Operational Permissive Mode: Not Supported 00:10:52.161 00:10:52.161 Health Information 00:10:52.161 ================== 00:10:52.161 Critical Warnings: 00:10:52.161 Available Spare Space: OK 00:10:52.161 Temperature: OK 00:10:52.161 Device Reliability: OK 00:10:52.161 Read Only: No 00:10:52.161 Volatile Memory Backup: OK 00:10:52.161 Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.161 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:52.161 Available Spare: 0% 00:10:52.161 Available Spare Threshold: 0% 00:10:52.161 Life Percentage Used: 0% 00:10:52.161 Data Units Read: 3757 00:10:52.161 Data Units Written: 1735 00:10:52.161 Host Read Commands: 182257 00:10:52.161 Host Write Commands: 89515 00:10:52.161 Controller Busy Time: 0 minutes 00:10:52.161 Power Cycles: 0 00:10:52.161 Power On Hours: 0 hours 00:10:52.161 Unsafe Shutdowns: 0 00:10:52.161 Unrecoverable Media Errors: 0 00:10:52.161 Lifetime Error Log Entries: 0 00:10:52.161 Warning Temperature Time: 0 minutes 00:10:52.161 Critical Temperature Time: 0 minutes 00:10:52.161 00:10:52.161 Number of Queues 00:10:52.161 ================ 00:10:52.161 Number of I/O Submission Queues: 64 00:10:52.161 Number of I/O Completion Queues: 64 00:10:52.161 00:10:52.161 ZNS Specific Controller Data 00:10:52.161 ============================ 00:10:52.162 Zone Append Size Limit: 0 00:10:52.162 00:10:52.162 00:10:52.162 Active Namespaces 00:10:52.162 ================= 00:10:52.162 Namespace ID:1 00:10:52.162 Error Recovery Timeout: Unlimited 00:10:52.162 Command Set Identifier: NVM (00h) 00:10:52.162 Deallocate: Supported 00:10:52.162 Deallocated/Unwritten Error: Supported 00:10:52.162 Deallocated Read Value: All 0x00 00:10:52.162 Deallocate in Write Zeroes: Not Supported 00:10:52.162 Deallocated Guard Field: 0xFFFF 00:10:52.162 Flush: Supported 00:10:52.162 Reservation: Not Supported 00:10:52.162 Namespace Sharing Capabilities: Private 00:10:52.162 Size (in LBAs): 1048576 (4GiB) 00:10:52.162 Capacity (in LBAs): 1048576 (4GiB) 00:10:52.162 Utilization (in LBAs): 1048576 (4GiB) 00:10:52.162 Thin Provisioning: Not Supported 00:10:52.162 Per-NS Atomic Units: No 00:10:52.162 Maximum Single Source Range Length: 128 00:10:52.162 Maximum Copy Length: 128 00:10:52.162 Maximum Source Range Count: 128 00:10:52.162 NGUID/EUI64 Never Reused: No 00:10:52.162 Namespace Write Protected: No 00:10:52.162 Number of LBA Formats: 8 00:10:52.162 Current LBA Format: LBA Format #04 00:10:52.162 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.162 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.162 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.162 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.162 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.162 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.162 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.162 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.162 00:10:52.162 Namespace ID:2 00:10:52.162 Error Recovery Timeout: Unlimited 00:10:52.162 Command Set Identifier: NVM (00h) 00:10:52.162 Deallocate: Supported 00:10:52.162 Deallocated/Unwritten Error: Supported 00:10:52.162 Deallocated Read Value: All 0x00 00:10:52.162 Deallocate in Write Zeroes: Not Supported 00:10:52.162 Deallocated Guard Field: 0xFFFF 00:10:52.162 Flush: Supported 00:10:52.162 Reservation: Not Supported 00:10:52.162 Namespace Sharing Capabilities: Private 00:10:52.162 Size (in LBAs): 
1048576 (4GiB) 00:10:52.162 Capacity (in LBAs): 1048576 (4GiB) 00:10:52.162 Utilization (in LBAs): 1048576 (4GiB) 00:10:52.162 Thin Provisioning: Not Supported 00:10:52.162 Per-NS Atomic Units: No 00:10:52.162 Maximum Single Source Range Length: 128 00:10:52.162 Maximum Copy Length: 128 00:10:52.162 Maximum Source Range Count: 128 00:10:52.162 NGUID/EUI64 Never Reused: No 00:10:52.162 Namespace Write Protected: No 00:10:52.162 Number of LBA Formats: 8 00:10:52.162 Current LBA Format: LBA Format #04 00:10:52.162 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.162 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.162 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.162 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.162 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.162 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.162 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.162 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.162 00:10:52.162 Namespace ID:3 00:10:52.162 Error Recovery Timeout: Unlimited 00:10:52.162 Command Set Identifier: NVM (00h) 00:10:52.162 Deallocate: Supported 00:10:52.162 Deallocated/Unwritten Error: Supported 00:10:52.162 Deallocated Read Value: All 0x00 00:10:52.162 Deallocate in Write Zeroes: Not Supported 00:10:52.162 Deallocated Guard Field: 0xFFFF 00:10:52.162 Flush: Supported 00:10:52.162 Reservation: Not Supported 00:10:52.162 Namespace Sharing Capabilities: Private 00:10:52.162 Size (in LBAs): 1048576 (4GiB) 00:10:52.162 Capacity (in LBAs): 1048576 (4GiB) 00:10:52.162 Utilization (in LBAs): 1048576 (4GiB) 00:10:52.162 Thin Provisioning: Not Supported 00:10:52.162 Per-NS Atomic Units: No 00:10:52.162 Maximum Single Source Range Length: 128 00:10:52.162 Maximum Copy Length: 128 00:10:52.162 Maximum Source Range Count: 128 00:10:52.162 NGUID/EUI64 Never Reused: No 00:10:52.162 Namespace Write Protected: No 00:10:52.162 Number of LBA Formats: 8 00:10:52.162 Current LBA Format: LBA Format #04 00:10:52.162 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.162 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.162 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.162 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.162 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.162 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.162 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.162 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.162 00:10:52.162 04:50:59 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:52.162 04:50:59 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:10:52.421 ===================================================== 00:10:52.421 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:52.421 ===================================================== 00:10:52.421 Controller Capabilities/Features 00:10:52.421 ================================ 00:10:52.421 Vendor ID: 1b36 00:10:52.421 Subsystem Vendor ID: 1af4 00:10:52.421 Serial Number: 12340 00:10:52.421 Model Number: QEMU NVMe Ctrl 00:10:52.421 Firmware Version: 8.0.0 00:10:52.421 Recommended Arb Burst: 6 00:10:52.421 IEEE OUI Identifier: 00 54 52 00:10:52.421 Multi-path I/O 00:10:52.421 May have multiple subsystem ports: No 00:10:52.421 May have multiple controllers: No 00:10:52.421 Associated with SR-IOV VF: No 00:10:52.421 Max Data Transfer Size: 524288 00:10:52.421 Max Number of Namespaces: 256 
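The identify dumps in this run come from the for-bdf loop visible in the xtrace above, which calls spdk_nvme_identify once per PCIe address. A minimal standalone sketch of that pattern, with the bdfs array hand-filled here from the addresses exercised in this log (in the real test script the array is populated elsewhere):

#!/usr/bin/env bash
# Dump identify data for each QEMU NVMe controller seen in this run.
bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
done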
00:10:52.421 Max Number of I/O Queues: 64 00:10:52.421 NVMe Specification Version (VS): 1.4 00:10:52.421 NVMe Specification Version (Identify): 1.4 00:10:52.421 Maximum Queue Entries: 2048 00:10:52.421 Contiguous Queues Required: Yes 00:10:52.421 Arbitration Mechanisms Supported 00:10:52.421 Weighted Round Robin: Not Supported 00:10:52.421 Vendor Specific: Not Supported 00:10:52.421 Reset Timeout: 7500 ms 00:10:52.421 Doorbell Stride: 4 bytes 00:10:52.421 NVM Subsystem Reset: Not Supported 00:10:52.421 Command Sets Supported 00:10:52.421 NVM Command Set: Supported 00:10:52.421 Boot Partition: Not Supported 00:10:52.421 Memory Page Size Minimum: 4096 bytes 00:10:52.421 Memory Page Size Maximum: 65536 bytes 00:10:52.421 Persistent Memory Region: Not Supported 00:10:52.421 Optional Asynchronous Events Supported 00:10:52.421 Namespace Attribute Notices: Supported 00:10:52.421 Firmware Activation Notices: Not Supported 00:10:52.421 ANA Change Notices: Not Supported 00:10:52.421 PLE Aggregate Log Change Notices: Not Supported 00:10:52.421 LBA Status Info Alert Notices: Not Supported 00:10:52.421 EGE Aggregate Log Change Notices: Not Supported 00:10:52.421 Normal NVM Subsystem Shutdown event: Not Supported 00:10:52.421 Zone Descriptor Change Notices: Not Supported 00:10:52.421 Discovery Log Change Notices: Not Supported 00:10:52.421 Controller Attributes 00:10:52.421 128-bit Host Identifier: Not Supported 00:10:52.421 Non-Operational Permissive Mode: Not Supported 00:10:52.421 NVM Sets: Not Supported 00:10:52.421 Read Recovery Levels: Not Supported 00:10:52.421 Endurance Groups: Not Supported 00:10:52.421 Predictable Latency Mode: Not Supported 00:10:52.421 Traffic Based Keep Alive: Not Supported 00:10:52.421 Namespace Granularity: Not Supported 00:10:52.421 SQ Associations: Not Supported 00:10:52.421 UUID List: Not Supported 00:10:52.421 Multi-Domain Subsystem: Not Supported 00:10:52.421 Fixed Capacity Management: Not Supported 00:10:52.421 Variable Capacity Management: Not Supported 00:10:52.421 Delete Endurance Group: Not Supported 00:10:52.421 Delete NVM Set: Not Supported 00:10:52.421 Extended LBA Formats Supported: Supported 00:10:52.421 Flexible Data Placement Supported: Not Supported 00:10:52.421 00:10:52.421 Controller Memory Buffer Support 00:10:52.421 ================================ 00:10:52.421 Supported: No 00:10:52.421 00:10:52.421 Persistent Memory Region Support 00:10:52.421 ================================ 00:10:52.421 Supported: No 00:10:52.421 00:10:52.421 Admin Command Set Attributes 00:10:52.421 ============================ 00:10:52.421 Security Send/Receive: Not Supported 00:10:52.421 Format NVM: Supported 00:10:52.421 Firmware Activate/Download: Not Supported 00:10:52.421 Namespace Management: Supported 00:10:52.421 Device Self-Test: Not Supported 00:10:52.421 Directives: Supported 00:10:52.421 NVMe-MI: Not Supported 00:10:52.421 Virtualization Management: Not Supported 00:10:52.421 Doorbell Buffer Config: Supported 00:10:52.421 Get LBA Status Capability: Not Supported 00:10:52.421 Command & Feature Lockdown Capability: Not Supported 00:10:52.421 Abort Command Limit: 4 00:10:52.421 Async Event Request Limit: 4 00:10:52.421 Number of Firmware Slots: N/A 00:10:52.421 Firmware Slot 1 Read-Only: N/A 00:10:52.421 Firmware Activation Without Reset: N/A 00:10:52.421 Multiple Update Detection Support: N/A 00:10:52.421 Firmware Update Granularity: No Information Provided 00:10:52.421 Per-Namespace SMART Log: Yes 00:10:52.421 Asymmetric Namespace Access Log Page: Not Supported
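The 4-byte Doorbell Stride reported above pins down where each queue's doorbell register lives: per the NVMe spec, queue y's submission-tail doorbell sits at BAR0 offset 0x1000 + (2y)*(4 << CAP.DSTRD) and its completion-head doorbell at 0x1000 + (2y + 1)*(4 << CAP.DSTRD). A quick sketch of that arithmetic for the stride shown here (DSTRD = 0):

# Doorbell offsets for the first few queue pairs with a 4-byte stride (CAP.DSTRD = 0).
dstrd=0
for qid in 0 1 2; do
    sq=$(( 0x1000 + (2 * qid)     * (4 << dstrd) ))
    cq=$(( 0x1000 + (2 * qid + 1) * (4 << dstrd) ))
    printf 'qid %d: SQ tail doorbell 0x%04x, CQ head doorbell 0x%04x\n' "$qid" "$sq" "$cq"
done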
00:10:52.421 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:52.421 Command Effects Log Page: Supported 00:10:52.421 Get Log Page Extended Data: Supported 00:10:52.421 Telemetry Log Pages: Not Supported 00:10:52.421 Persistent Event Log Pages: Not Supported 00:10:52.421 Supported Log Pages Log Page: May Support 00:10:52.421 Commands Supported & Effects Log Page: Not Supported 00:10:52.421 Feature Identifiers & Effects Log Page: May Support 00:10:52.421 NVMe-MI Commands & Effects Log Page: May Support 00:10:52.421 Data Area 4 for Telemetry Log: Not Supported 00:10:52.421 Error Log Page Entries Supported: 1 00:10:52.421 Keep Alive: Not Supported 00:10:52.421 00:10:52.421 NVM Command Set Attributes 00:10:52.421 ========================== 00:10:52.421 Submission Queue Entry Size 00:10:52.421 Max: 64 00:10:52.421 Min: 64 00:10:52.421 Completion Queue Entry Size 00:10:52.421 Max: 16 00:10:52.421 Min: 16 00:10:52.421 Number of Namespaces: 256 00:10:52.421 Compare Command: Supported 00:10:52.421 Write Uncorrectable Command: Not Supported 00:10:52.421 Dataset Management Command: Supported 00:10:52.422 Write Zeroes Command: Supported 00:10:52.422 Set Features Save Field: Supported 00:10:52.422 Reservations: Not Supported 00:10:52.422 Timestamp: Supported 00:10:52.422 Copy: Supported 00:10:52.422 Volatile Write Cache: Present 00:10:52.422 Atomic Write Unit (Normal): 1 00:10:52.422 Atomic Write Unit (PFail): 1 00:10:52.422 Atomic Compare & Write Unit: 1 00:10:52.422 Fused Compare & Write: Not Supported 00:10:52.422 Scatter-Gather List 00:10:52.422 SGL Command Set: Supported 00:10:52.422 SGL Keyed: Not Supported 00:10:52.422 SGL Bit Bucket Descriptor: Not Supported 00:10:52.422 SGL Metadata Pointer: Not Supported 00:10:52.422 Oversized SGL: Not Supported 00:10:52.422 SGL Metadata Address: Not Supported 00:10:52.422 SGL Offset: Not Supported 00:10:52.422 Transport SGL Data Block: Not Supported 00:10:52.422 Replay Protected Memory Block: Not Supported 00:10:52.422 00:10:52.422 Firmware Slot Information 00:10:52.422 ========================= 00:10:52.422 Active slot: 1 00:10:52.422 Slot 1 Firmware Revision: 1.0 00:10:52.422 00:10:52.422 00:10:52.422 Commands Supported and Effects 00:10:52.422 ============================== 00:10:52.422 Admin Commands 00:10:52.422 -------------- 00:10:52.422 Delete I/O Submission Queue (00h): Supported 00:10:52.422 Create I/O Submission Queue (01h): Supported 00:10:52.422 Get Log Page (02h): Supported 00:10:52.422 Delete I/O Completion Queue (04h): Supported 00:10:52.422 Create I/O Completion Queue (05h): Supported 00:10:52.422 Identify (06h): Supported 00:10:52.422 Abort (08h): Supported 00:10:52.422 Set Features (09h): Supported 00:10:52.422 Get Features (0Ah): Supported 00:10:52.422 Asynchronous Event Request (0Ch): Supported 00:10:52.422 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:52.422 Directive Send (19h): Supported 00:10:52.422 Directive Receive (1Ah): Supported 00:10:52.422 Virtualization Management (1Ch): Supported 00:10:52.422 Doorbell Buffer Config (7Ch): Supported 00:10:52.422 Format NVM (80h): Supported LBA-Change 00:10:52.422 I/O Commands 00:10:52.422 ------------ 00:10:52.422 Flush (00h): Supported LBA-Change 00:10:52.422 Write (01h): Supported LBA-Change 00:10:52.422 Read (02h): Supported 00:10:52.422 Compare (05h): Supported 00:10:52.422 Write Zeroes (08h): Supported LBA-Change 00:10:52.422 Dataset Management (09h): Supported LBA-Change 00:10:52.422 Unknown (0Ch): Supported 00:10:52.422 Unknown (12h): Supported 00:10:52.422 Copy (19h):
Supported LBA-Change 00:10:52.422 Unknown (1Dh): Supported LBA-Change 00:10:52.422 00:10:52.422 Error Log 00:10:52.422 ========= 00:10:52.422 00:10:52.422 Arbitration 00:10:52.422 =========== 00:10:52.422 Arbitration Burst: no limit 00:10:52.422 00:10:52.422 Power Management 00:10:52.422 ================ 00:10:52.422 Number of Power States: 1 00:10:52.422 Current Power State: Power State #0 00:10:52.422 Power State #0: 00:10:52.422 Max Power: 25.00 W 00:10:52.422 Non-Operational State: Operational 00:10:52.422 Entry Latency: 16 microseconds 00:10:52.422 Exit Latency: 4 microseconds 00:10:52.422 Relative Read Throughput: 0 00:10:52.422 Relative Read Latency: 0 00:10:52.422 Relative Write Throughput: 0 00:10:52.422 Relative Write Latency: 0 00:10:52.422 Idle Power: Not Reported 00:10:52.422 Active Power: Not Reported 00:10:52.422 Non-Operational Permissive Mode: Not Supported 00:10:52.422 00:10:52.422 Health Information 00:10:52.422 ================== 00:10:52.422 Critical Warnings: 00:10:52.422 Available Spare Space: OK 00:10:52.422 Temperature: OK 00:10:52.422 Device Reliability: OK 00:10:52.422 Read Only: No 00:10:52.422 Volatile Memory Backup: OK 00:10:52.422 Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.422 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:52.422 Available Spare: 0% 00:10:52.422 Available Spare Threshold: 0% 00:10:52.422 Life Percentage Used: 0% 00:10:52.422 Data Units Read: 1776 00:10:52.422 Data Units Written: 820 00:10:52.422 Host Read Commands: 87079 00:10:52.422 Host Write Commands: 43236 00:10:52.422 Controller Busy Time: 0 minutes 00:10:52.422 Power Cycles: 0 00:10:52.422 Power On Hours: 0 hours 00:10:52.422 Unsafe Shutdowns: 0 00:10:52.422 Unrecoverable Media Errors: 0 00:10:52.422 Lifetime Error Log Entries: 0 00:10:52.422 Warning Temperature Time: 0 minutes 00:10:52.422 Critical Temperature Time: 0 minutes 00:10:52.422 00:10:52.422 Number of Queues 00:10:52.422 ================ 00:10:52.422 Number of I/O Submission Queues: 64 00:10:52.422 Number of I/O Completion Queues: 64 00:10:52.422 00:10:52.422 ZNS Specific Controller Data 00:10:52.422 ============================ 00:10:52.422 Zone Append Size Limit: 0 00:10:52.422 00:10:52.422 00:10:52.422 Active Namespaces 00:10:52.422 ================= 00:10:52.422 Namespace ID:1 00:10:52.422 Error Recovery Timeout: Unlimited 00:10:52.422 Command Set Identifier: NVM (00h) 00:10:52.422 Deallocate: Supported 00:10:52.422 Deallocated/Unwritten Error: Supported 00:10:52.422 Deallocated Read Value: All 0x00 00:10:52.422 Deallocate in Write Zeroes: Not Supported 00:10:52.422 Deallocated Guard Field: 0xFFFF 00:10:52.422 Flush: Supported 00:10:52.422 Reservation: Not Supported 00:10:52.422 Metadata Transferred as: Separate Metadata Buffer 00:10:52.422 Namespace Sharing Capabilities: Private 00:10:52.422 Size (in LBAs): 1548666 (5GiB) 00:10:52.422 Capacity (in LBAs): 1548666 (5GiB) 00:10:52.422 Utilization (in LBAs): 1548666 (5GiB) 00:10:52.422 Thin Provisioning: Not Supported 00:10:52.422 Per-NS Atomic Units: No 00:10:52.422 Maximum Single Source Range Length: 128 00:10:52.422 Maximum Copy Length: 128 00:10:52.422 Maximum Source Range Count: 128 00:10:52.422 NGUID/EUI64 Never Reused: No 00:10:52.422 Namespace Write Protected: No 00:10:52.422 Number of LBA Formats: 8 00:10:52.422 Current LBA Format: LBA Format #07 00:10:52.422 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.422 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.422 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.422 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:10:52.422 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.422 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.422 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.422 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.422 00:10:52.422 04:50:59 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:52.422 04:50:59 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:10:52.681 ===================================================== 00:10:52.681 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:52.681 ===================================================== 00:10:52.681 Controller Capabilities/Features 00:10:52.681 ================================ 00:10:52.681 Vendor ID: 1b36 00:10:52.681 Subsystem Vendor ID: 1af4 00:10:52.681 Serial Number: 12341 00:10:52.681 Model Number: QEMU NVMe Ctrl 00:10:52.681 Firmware Version: 8.0.0 00:10:52.681 Recommended Arb Burst: 6 00:10:52.681 IEEE OUI Identifier: 00 54 52 00:10:52.681 Multi-path I/O 00:10:52.681 May have multiple subsystem ports: No 00:10:52.681 May have multiple controllers: No 00:10:52.681 Associated with SR-IOV VF: No 00:10:52.681 Max Data Transfer Size: 524288 00:10:52.681 Max Number of Namespaces: 256 00:10:52.681 Max Number of I/O Queues: 64 00:10:52.681 NVMe Specification Version (VS): 1.4 00:10:52.681 NVMe Specification Version (Identify): 1.4 00:10:52.681 Maximum Queue Entries: 2048 00:10:52.681 Contiguous Queues Required: Yes 00:10:52.681 Arbitration Mechanisms Supported 00:10:52.681 Weighted Round Robin: Not Supported 00:10:52.681 Vendor Specific: Not Supported 00:10:52.681 Reset Timeout: 7500 ms 00:10:52.681 Doorbell Stride: 4 bytes 00:10:52.681 NVM Subsystem Reset: Not Supported 00:10:52.681 Command Sets Supported 00:10:52.681 NVM Command Set: Supported 00:10:52.681 Boot Partition: Not Supported 00:10:52.681 Memory Page Size Minimum: 4096 bytes 00:10:52.681 Memory Page Size Maximum: 65536 bytes 00:10:52.681 Persistent Memory Region: Not Supported 00:10:52.681 Optional Asynchronous Events Supported 00:10:52.681 Namespace Attribute Notices: Supported 00:10:52.681 Firmware Activation Notices: Not Supported 00:10:52.681 ANA Change Notices: Not Supported 00:10:52.681 PLE Aggregate Log Change Notices: Not Supported 00:10:52.681 LBA Status Info Alert Notices: Not Supported 00:10:52.681 EGE Aggregate Log Change Notices: Not Supported 00:10:52.681 Normal NVM Subsystem Shutdown event: Not Supported 00:10:52.681 Zone Descriptor Change Notices: Not Supported 00:10:52.681 Discovery Log Change Notices: Not Supported 00:10:52.681 Controller Attributes 00:10:52.681 128-bit Host Identifier: Not Supported 00:10:52.681 Non-Operational Permissive Mode: Not Supported 00:10:52.681 NVM Sets: Not Supported 00:10:52.681 Read Recovery Levels: Not Supported 00:10:52.681 Endurance Groups: Not Supported 00:10:52.681 Predictable Latency Mode: Not Supported 00:10:52.681 Traffic Based Keep Alive: Not Supported 00:10:52.681 Namespace Granularity: Not Supported 00:10:52.681 SQ Associations: Not Supported 00:10:52.681 UUID List: Not Supported 00:10:52.681 Multi-Domain Subsystem: Not Supported 00:10:52.681 Fixed Capacity Management: Not Supported 00:10:52.681 Variable Capacity Management: Not Supported 00:10:52.681 Delete Endurance Group: Not Supported 00:10:52.681 Delete NVM Set: Not Supported 00:10:52.681 Extended LBA Formats Supported: Supported 00:10:52.681 Flexible Data Placement Supported: Not Supported
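The 524288-byte Max Data Transfer Size above follows from two other identify fields: MDTS is encoded as a power of two in units of the minimum memory page size, and with the 4096-byte minimum reported here, 4096 << 7 = 524288 (i.e. MDTS = 7). A one-line sanity check of that arithmetic:

# MDTS decodes as a power-of-two multiple of the minimum memory page size.
mpsmin=4096; mdts=7
echo "max transfer: $(( mpsmin << mdts )) bytes"    # -> 524288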
00:10:52.681 00:10:52.681 Controller Memory Buffer Support 00:10:52.681 ================================ 00:10:52.681 Supported: No 00:10:52.681 00:10:52.681 Persistent Memory Region Support 00:10:52.681 ================================ 00:10:52.681 Supported: No 00:10:52.681 00:10:52.681 Admin Command Set Attributes 00:10:52.681 ============================ 00:10:52.681 Security Send/Receive: Not Supported 00:10:52.681 Format NVM: Supported 00:10:52.681 Firmware Activate/Download: Not Supported 00:10:52.681 Namespace Management: Supported 00:10:52.681 Device Self-Test: Not Supported 00:10:52.681 Directives: Supported 00:10:52.681 NVMe-MI: Not Supported 00:10:52.681 Virtualization Management: Not Supported 00:10:52.681 Doorbell Buffer Config: Supported 00:10:52.681 Get LBA Status Capability: Not Supported 00:10:52.681 Command & Feature Lockdown Capability: Not Supported 00:10:52.681 Abort Command Limit: 4 00:10:52.681 Async Event Request Limit: 4 00:10:52.681 Number of Firmware Slots: N/A 00:10:52.681 Firmware Slot 1 Read-Only: N/A 00:10:52.682 Firmware Activation Without Reset: N/A 00:10:52.682 Multiple Update Detection Support: N/A 00:10:52.682 Firmware Update Granularity: No Information Provided 00:10:52.682 Per-Namespace SMART Log: Yes 00:10:52.682 Asymmetric Namespace Access Log Page: Not Supported 00:10:52.682 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:52.682 Command Effects Log Page: Supported 00:10:52.682 Get Log Page Extended Data: Supported 00:10:52.682 Telemetry Log Pages: Not Supported 00:10:52.682 Persistent Event Log Pages: Not Supported 00:10:52.682 Supported Log Pages Log Page: May Support 00:10:52.682 Commands Supported & Effects Log Page: Not Supported 00:10:52.682 Feature Identifiers & Effects Log Page: May Support 00:10:52.682 NVMe-MI Commands & Effects Log Page: May Support 00:10:52.682 Data Area 4 for Telemetry Log: Not Supported 00:10:52.682 Error Log Page Entries Supported: 1 00:10:52.682 Keep Alive: Not Supported 00:10:52.682 00:10:52.682 NVM Command Set Attributes 00:10:52.682 ========================== 00:10:52.682 Submission Queue Entry Size 00:10:52.682 Max: 64 00:10:52.682 Min: 64 00:10:52.682 Completion Queue Entry Size 00:10:52.682 Max: 16 00:10:52.682 Min: 16 00:10:52.682 Number of Namespaces: 256 00:10:52.682 Compare Command: Supported 00:10:52.682 Write Uncorrectable Command: Not Supported 00:10:52.682 Dataset Management Command: Supported 00:10:52.682 Write Zeroes Command: Supported 00:10:52.682 Set Features Save Field: Supported 00:10:52.682 Reservations: Not Supported 00:10:52.682 Timestamp: Supported 00:10:52.682 Copy: Supported 00:10:52.682 Volatile Write Cache: Present 00:10:52.682 Atomic Write Unit (Normal): 1 00:10:52.682 Atomic Write Unit (PFail): 1 00:10:52.682 Atomic Compare & Write Unit: 1 00:10:52.682 Fused Compare & Write: Not Supported 00:10:52.682 Scatter-Gather List 00:10:52.682 SGL Command Set: Supported 00:10:52.682 SGL Keyed: Not Supported 00:10:52.682 SGL Bit Bucket Descriptor: Not Supported 00:10:52.682 SGL Metadata Pointer: Not Supported 00:10:52.682 Oversized SGL: Not Supported 00:10:52.682 SGL Metadata Address: Not Supported 00:10:52.682 SGL Offset: Not Supported 00:10:52.682 Transport SGL Data Block: Not Supported 00:10:52.682 Replay Protected Memory Block: Not Supported 00:10:52.682 00:10:52.682 Firmware Slot Information 00:10:52.682 ========================= 00:10:52.682 Active slot: 1 00:10:52.682 Slot 1 Firmware Revision: 1.0 00:10:52.682 00:10:52.682 00:10:52.682 Commands Supported and Effects 00:10:52.682
============================== 00:10:52.682 Admin Commands 00:10:52.682 -------------- 00:10:52.682 Delete I/O Submission Queue (00h): Supported 00:10:52.682 Create I/O Submission Queue (01h): Supported 00:10:52.682 Get Log Page (02h): Supported 00:10:52.682 Delete I/O Completion Queue (04h): Supported 00:10:52.682 Create I/O Completion Queue (05h): Supported 00:10:52.682 Identify (06h): Supported 00:10:52.682 Abort (08h): Supported 00:10:52.682 Set Features (09h): Supported 00:10:52.682 Get Features (0Ah): Supported 00:10:52.682 Asynchronous Event Request (0Ch): Supported 00:10:52.682 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:52.682 Directive Send (19h): Supported 00:10:52.682 Directive Receive (1Ah): Supported 00:10:52.682 Virtualization Management (1Ch): Supported 00:10:52.682 Doorbell Buffer Config (7Ch): Supported 00:10:52.682 Format NVM (80h): Supported LBA-Change 00:10:52.682 I/O Commands 00:10:52.682 ------------ 00:10:52.682 Flush (00h): Supported LBA-Change 00:10:52.682 Write (01h): Supported LBA-Change 00:10:52.682 Read (02h): Supported 00:10:52.682 Compare (05h): Supported 00:10:52.682 Write Zeroes (08h): Supported LBA-Change 00:10:52.682 Dataset Management (09h): Supported LBA-Change 00:10:52.682 Unknown (0Ch): Supported 00:10:52.682 Unknown (12h): Supported 00:10:52.682 Copy (19h): Supported LBA-Change 00:10:52.682 Unknown (1Dh): Supported LBA-Change 00:10:52.682 00:10:52.682 Error Log 00:10:52.682 ========= 00:10:52.682 00:10:52.682 Arbitration 00:10:52.682 =========== 00:10:52.682 Arbitration Burst: no limit 00:10:52.682 00:10:52.682 Power Management 00:10:52.682 ================ 00:10:52.682 Number of Power States: 1 00:10:52.682 Current Power State: Power State #0 00:10:52.682 Power State #0: 00:10:52.682 Max Power: 25.00 W 00:10:52.682 Non-Operational State: Operational 00:10:52.682 Entry Latency: 16 microseconds 00:10:52.682 Exit Latency: 4 microseconds 00:10:52.682 Relative Read Throughput: 0 00:10:52.682 Relative Read Latency: 0 00:10:52.682 Relative Write Throughput: 0 00:10:52.682 Relative Write Latency: 0 00:10:52.682 Idle Power: Not Reported 00:10:52.682 Active Power: Not Reported 00:10:52.682 Non-Operational Permissive Mode: Not Supported 00:10:52.682 00:10:52.682 Health Information 00:10:52.682 ================== 00:10:52.682 Critical Warnings: 00:10:52.682 Available Spare Space: OK 00:10:52.682 Temperature: OK 00:10:52.682 Device Reliability: OK 00:10:52.682 Read Only: No 00:10:52.682 Volatile Memory Backup: OK 00:10:52.682 Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.682 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:52.682 Available Spare: 0% 00:10:52.682 Available Spare Threshold: 0% 00:10:52.682 Life Percentage Used: 0% 00:10:52.682 Data Units Read: 1223 00:10:52.682 Data Units Written: 568 00:10:52.682 Host Read Commands: 60362 00:10:52.682 Host Write Commands: 29698 00:10:52.682 Controller Busy Time: 0 minutes 00:10:52.682 Power Cycles: 0 00:10:52.682 Power On Hours: 0 hours 00:10:52.682 Unsafe Shutdowns: 0 00:10:52.682 Unrecoverable Media Errors: 0 00:10:52.682 Lifetime Error Log Entries: 0 00:10:52.682 Warning Temperature Time: 0 minutes 00:10:52.682 Critical Temperature Time: 0 minutes 00:10:52.682 00:10:52.682 Number of Queues 00:10:52.682 ================ 00:10:52.682 Number of I/O Submission Queues: 64 00:10:52.682 Number of I/O Completion Queues: 64 00:10:52.682 00:10:52.682 ZNS Specific Controller Data 00:10:52.682 ============================ 00:10:52.682 Zone Append Size Limit: 0 00:10:52.682 00:10:52.682 
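The SMART fields just above use spec-defined units: the composite temperature is reported in Kelvin (hence the 323 Kelvin / 50 Celsius pair) and one data unit is 1000 blocks of 512 bytes. A small sketch decoding the 12341 controller's numbers shown above:

# Decode the health log fields printed above for the 12341 controller.
temp_k=323; units_read=1223
echo "temperature: $(( temp_k - 273 )) C"                       # -> 50 C
echo "host data read: $(( units_read * 1000 * 512 )) bytes"     # ~597 MiB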
00:10:52.682 Active Namespaces 00:10:52.682 ================= 00:10:52.682 Namespace ID:1 00:10:52.682 Error Recovery Timeout: Unlimited 00:10:52.682 Command Set Identifier: NVM (00h) 00:10:52.682 Deallocate: Supported 00:10:52.682 Deallocated/Unwritten Error: Supported 00:10:52.682 Deallocated Read Value: All 0x00 00:10:52.682 Deallocate in Write Zeroes: Not Supported 00:10:52.682 Deallocated Guard Field: 0xFFFF 00:10:52.682 Flush: Supported 00:10:52.682 Reservation: Not Supported 00:10:52.682 Namespace Sharing Capabilities: Private 00:10:52.682 Size (in LBAs): 1310720 (5GiB) 00:10:52.682 Capacity (in LBAs): 1310720 (5GiB) 00:10:52.682 Utilization (in LBAs): 1310720 (5GiB) 00:10:52.682 Thin Provisioning: Not Supported 00:10:52.682 Per-NS Atomic Units: No 00:10:52.682 Maximum Single Source Range Length: 128 00:10:52.682 Maximum Copy Length: 128 00:10:52.682 Maximum Source Range Count: 128 00:10:52.682 NGUID/EUI64 Never Reused: No 00:10:52.682 Namespace Write Protected: No 00:10:52.682 Number of LBA Formats: 8 00:10:52.682 Current LBA Format: LBA Format #04 00:10:52.682 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.682 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.682 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.682 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.682 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.682 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.682 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.682 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.682 00:10:52.682 04:50:59 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:52.682 04:50:59 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:10:52.942 ===================================================== 00:10:52.942 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:52.942 ===================================================== 00:10:52.942 Controller Capabilities/Features 00:10:52.942 ================================ 00:10:52.942 Vendor ID: 1b36 00:10:52.942 Subsystem Vendor ID: 1af4 00:10:52.942 Serial Number: 12342 00:10:52.942 Model Number: QEMU NVMe Ctrl 00:10:52.942 Firmware Version: 8.0.0 00:10:52.942 Recommended Arb Burst: 6 00:10:52.942 IEEE OUI Identifier: 00 54 52 00:10:52.942 Multi-path I/O 00:10:52.942 May have multiple subsystem ports: No 00:10:52.942 May have multiple controllers: No 00:10:52.942 Associated with SR-IOV VF: No 00:10:52.942 Max Data Transfer Size: 524288 00:10:52.942 Max Number of Namespaces: 256 00:10:52.942 Max Number of I/O Queues: 64 00:10:52.942 NVMe Specification Version (VS): 1.4 00:10:52.942 NVMe Specification Version (Identify): 1.4 00:10:52.942 Maximum Queue Entries: 2048 00:10:52.942 Contiguous Queues Required: Yes 00:10:52.942 Arbitration Mechanisms Supported 00:10:52.942 Weighted Round Robin: Not Supported 00:10:52.942 Vendor Specific: Not Supported 00:10:52.942 Reset Timeout: 7500 ms 00:10:52.942 Doorbell Stride: 4 bytes 00:10:52.942 NVM Subsystem Reset: Not Supported 00:10:52.942 Command Sets Supported 00:10:52.942 NVM Command Set: Supported 00:10:52.942 Boot Partition: Not Supported 00:10:52.942 Memory Page Size Minimum: 4096 bytes 00:10:52.942 Memory Page Size Maximum: 65536 bytes 00:10:52.942 Persistent Memory Region: Not Supported 00:10:52.942 Optional Asynchronous Events Supported 00:10:52.942 Namespace Attribute Notices: Supported 00:10:52.942 Firmware Activation Notices: Not Supported 00:10:52.942 ANA Change 
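The namespace above reports its size in LBAs; with the current LBA format #04 (4096-byte data blocks, no metadata) that multiplies out to exactly the 5 GiB shown:

# Namespace capacity = LBA count x block size of the current LBA format.
nlbas=1310720; lba_bytes=4096
bytes=$(( nlbas * lba_bytes ))
echo "$bytes bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"     # -> 5 GiB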
Notices: Not Supported 00:10:52.943 PLE Aggregate Log Change Notices: Not Supported 00:10:52.943 LBA Status Info Alert Notices: Not Supported 00:10:52.943 EGE Aggregate Log Change Notices: Not Supported 00:10:52.943 Normal NVM Subsystem Shutdown event: Not Supported 00:10:52.943 Zone Descriptor Change Notices: Not Supported 00:10:52.943 Discovery Log Change Notices: Not Supported 00:10:52.943 Controller Attributes 00:10:52.943 128-bit Host Identifier: Not Supported 00:10:52.943 Non-Operational Permissive Mode: Not Supported 00:10:52.943 NVM Sets: Not Supported 00:10:52.943 Read Recovery Levels: Not Supported 00:10:52.943 Endurance Groups: Not Supported 00:10:52.943 Predictable Latency Mode: Not Supported 00:10:52.943 Traffic Based Keep Alive: Not Supported 00:10:52.943 Namespace Granularity: Not Supported 00:10:52.943 SQ Associations: Not Supported 00:10:52.943 UUID List: Not Supported 00:10:52.943 Multi-Domain Subsystem: Not Supported 00:10:52.943 Fixed Capacity Management: Not Supported 00:10:52.943 Variable Capacity Management: Not Supported 00:10:52.943 Delete Endurance Group: Not Supported 00:10:52.943 Delete NVM Set: Not Supported 00:10:52.943 Extended LBA Formats Supported: Supported 00:10:52.943 Flexible Data Placement Supported: Not Supported 00:10:52.943 00:10:52.943 Controller Memory Buffer Support 00:10:52.943 ================================ 00:10:52.943 Supported: No 00:10:52.943 00:10:52.943 Persistent Memory Region Support 00:10:52.943 ================================ 00:10:52.943 Supported: No 00:10:52.943 00:10:52.943 Admin Command Set Attributes 00:10:52.943 ============================ 00:10:52.943 Security Send/Receive: Not Supported 00:10:52.943 Format NVM: Supported 00:10:52.943 Firmware Activate/Download: Not Supported 00:10:52.943 Namespace Management: Supported 00:10:52.943 Device Self-Test: Not Supported 00:10:52.943 Directives: Supported 00:10:52.943 NVMe-MI: Not Supported 00:10:52.943 Virtualization Management: Not Supported 00:10:52.943 Doorbell Buffer Config: Supported 00:10:52.943 Get LBA Status Capability: Not Supported 00:10:52.943 Command & Feature Lockdown Capability: Not Supported 00:10:52.943 Abort Command Limit: 4 00:10:52.943 Async Event Request Limit: 4 00:10:52.943 Number of Firmware Slots: N/A 00:10:52.943 Firmware Slot 1 Read-Only: N/A 00:10:52.943 Firmware Activation Without Reset: N/A 00:10:52.943 Multiple Update Detection Support: N/A 00:10:52.943 Firmware Update Granularity: No Information Provided 00:10:52.943 Per-Namespace SMART Log: Yes 00:10:52.943 Asymmetric Namespace Access Log Page: Not Supported 00:10:52.943 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:52.943 Command Effects Log Page: Supported 00:10:52.943 Get Log Page Extended Data: Supported 00:10:52.943 Telemetry Log Pages: Not Supported 00:10:52.943 Persistent Event Log Pages: Not Supported 00:10:52.943 Supported Log Pages Log Page: May Support 00:10:52.943 Commands Supported & Effects Log Page: Not Supported 00:10:52.943 Feature Identifiers & Effects Log Page: May Support 00:10:52.943 NVMe-MI Commands & Effects Log Page: May Support 00:10:52.943 Data Area 4 for Telemetry Log: Not Supported 00:10:52.943 Error Log Page Entries Supported: 1 00:10:52.943 Keep Alive: Not Supported 00:10:52.943 00:10:52.943 NVM Command Set Attributes 00:10:52.943 ========================== 00:10:52.943 Submission Queue Entry Size 00:10:52.943 Max: 64 00:10:52.943 Min: 64 00:10:52.943 Completion Queue Entry Size 00:10:52.943 Max: 16 00:10:52.943 Min: 16 00:10:52.943 Number of Namespaces: 256
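The entry sizes above (64-byte submission entries, 16-byte completion entries), combined with the 2048-entry queue cap reported earlier for this controller, bound the memory a maximally sized queue pair needs:

# Memory footprint of one full-depth queue pair on this controller.
entries=2048
echo "SQ: $(( entries * 64 / 1024 )) KiB, CQ: $(( entries * 16 / 1024 )) KiB"   # 128 KiB + 32 KiB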
00:10:52.943 Compare Command: Supported 00:10:52.943 Write Uncorrectable Command: Not Supported 00:10:52.943 Dataset Management Command: Supported 00:10:52.943 Write Zeroes Command: Supported 00:10:52.943 Set Features Save Field: Supported 00:10:52.943 Reservations: Not Supported 00:10:52.943 Timestamp: Supported 00:10:52.943 Copy: Supported 00:10:52.943 Volatile Write Cache: Present 00:10:52.943 Atomic Write Unit (Normal): 1 00:10:52.943 Atomic Write Unit (PFail): 1 00:10:52.943 Atomic Compare & Write Unit: 1 00:10:52.943 Fused Compare & Write: Not Supported 00:10:52.943 Scatter-Gather List 00:10:52.943 SGL Command Set: Supported 00:10:52.943 SGL Keyed: Not Supported 00:10:52.943 SGL Bit Bucket Descriptor: Not Supported 00:10:52.943 SGL Metadata Pointer: Not Supported 00:10:52.943 Oversized SGL: Not Supported 00:10:52.943 SGL Metadata Address: Not Supported 00:10:52.943 SGL Offset: Not Supported 00:10:52.943 Transport SGL Data Block: Not Supported 00:10:52.943 Replay Protected Memory Block: Not Supported 00:10:52.943 00:10:52.943 Firmware Slot Information 00:10:52.943 ========================= 00:10:52.943 Active slot: 1 00:10:52.943 Slot 1 Firmware Revision: 1.0 00:10:52.943 00:10:52.943 00:10:52.943 Commands Supported and Effects 00:10:52.943 ============================== 00:10:52.943 Admin Commands 00:10:52.943 -------------- 00:10:52.943 Delete I/O Submission Queue (00h): Supported 00:10:52.943 Create I/O Submission Queue (01h): Supported 00:10:52.943 Get Log Page (02h): Supported 00:10:52.943 Delete I/O Completion Queue (04h): Supported 00:10:52.943 Create I/O Completion Queue (05h): Supported 00:10:52.943 Identify (06h): Supported 00:10:52.943 Abort (08h): Supported 00:10:52.943 Set Features (09h): Supported 00:10:52.943 Get Features (0Ah): Supported 00:10:52.943 Asynchronous Event Request (0Ch): Supported 00:10:52.943 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:52.943 Directive Send (19h): Supported 00:10:52.943 Directive Receive (1Ah): Supported 00:10:52.943 Virtualization Management (1Ch): Supported 00:10:52.943 Doorbell Buffer Config (7Ch): Supported 00:10:52.943 Format NVM (80h): Supported LBA-Change 00:10:52.943 I/O Commands 00:10:52.943 ------------ 00:10:52.943 Flush (00h): Supported LBA-Change 00:10:52.943 Write (01h): Supported LBA-Change 00:10:52.943 Read (02h): Supported 00:10:52.943 Compare (05h): Supported 00:10:52.943 Write Zeroes (08h): Supported LBA-Change 00:10:52.943 Dataset Management (09h): Supported LBA-Change 00:10:52.943 Unknown (0Ch): Supported 00:10:52.943 Unknown (12h): Supported 00:10:52.943 Copy (19h): Supported LBA-Change 00:10:52.943 Unknown (1Dh): Supported LBA-Change 00:10:52.943 00:10:52.943 Error Log 00:10:52.943 ========= 00:10:52.943 00:10:52.943 Arbitration 00:10:52.943 =========== 00:10:52.943 Arbitration Burst: no limit 00:10:52.943 00:10:52.943 Power Management 00:10:52.943 ================ 00:10:52.943 Number of Power States: 1 00:10:52.943 Current Power State: Power State #0 00:10:52.943 Power State #0: 00:10:52.943 Max Power: 25.00 W 00:10:52.943 Non-Operational State: Operational 00:10:52.943 Entry Latency: 16 microseconds 00:10:52.943 Exit Latency: 4 microseconds 00:10:52.943 Relative Read Throughput: 0 00:10:52.943 Relative Read Latency: 0 00:10:52.943 Relative Write Throughput: 0 00:10:52.943 Relative Write Latency: 0 00:10:52.943 Idle Power: Not Reported 00:10:52.943 Active Power: Not Reported 00:10:52.943 Non-Operational Permissive Mode: Not Supported 00:10:52.943 00:10:52.943 Health Information 00:10:52.943 
================== 00:10:52.943 Critical Warnings: 00:10:52.943 Available Spare Space: OK 00:10:52.943 Temperature: OK 00:10:52.943 Device Reliability: OK 00:10:52.943 Read Only: No 00:10:52.943 Volatile Memory Backup: OK 00:10:52.943 Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.943 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:52.943 Available Spare: 0% 00:10:52.943 Available Spare Threshold: 0% 00:10:52.943 Life Percentage Used: 0% 00:10:52.943 Data Units Read: 3757 00:10:52.943 Data Units Written: 1735 00:10:52.943 Host Read Commands: 182257 00:10:52.943 Host Write Commands: 89515 00:10:52.943 Controller Busy Time: 0 minutes 00:10:52.943 Power Cycles: 0 00:10:52.943 Power On Hours: 0 hours 00:10:52.944 Unsafe Shutdowns: 0 00:10:52.944 Unrecoverable Media Errors: 0 00:10:52.944 Lifetime Error Log Entries: 0 00:10:52.944 Warning Temperature Time: 0 minutes 00:10:52.944 Critical Temperature Time: 0 minutes 00:10:52.944 00:10:52.944 Number of Queues 00:10:52.944 ================ 00:10:52.944 Number of I/O Submission Queues: 64 00:10:52.944 Number of I/O Completion Queues: 64 00:10:52.944 00:10:52.944 ZNS Specific Controller Data 00:10:52.944 ============================ 00:10:52.944 Zone Append Size Limit: 0 00:10:52.944 00:10:52.944 00:10:52.944 Active Namespaces 00:10:52.944 ================= 00:10:52.944 Namespace ID:1 00:10:52.944 Error Recovery Timeout: Unlimited 00:10:52.944 Command Set Identifier: NVM (00h) 00:10:52.944 Deallocate: Supported 00:10:52.944 Deallocated/Unwritten Error: Supported 00:10:52.944 Deallocated Read Value: All 0x00 00:10:52.944 Deallocate in Write Zeroes: Not Supported 00:10:52.944 Deallocated Guard Field: 0xFFFF 00:10:52.944 Flush: Supported 00:10:52.944 Reservation: Not Supported 00:10:52.944 Namespace Sharing Capabilities: Private 00:10:52.944 Size (in LBAs): 1048576 (4GiB) 00:10:52.944 Capacity (in LBAs): 1048576 (4GiB) 00:10:52.944 Utilization (in LBAs): 1048576 (4GiB) 00:10:52.944 Thin Provisioning: Not Supported 00:10:52.944 Per-NS Atomic Units: No 00:10:52.944 Maximum Single Source Range Length: 128 00:10:52.944 Maximum Copy Length: 128 00:10:52.944 Maximum Source Range Count: 128 00:10:52.944 NGUID/EUI64 Never Reused: No 00:10:52.944 Namespace Write Protected: No 00:10:52.944 Number of LBA Formats: 8 00:10:52.944 Current LBA Format: LBA Format #04 00:10:52.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.944 00:10:52.944 Namespace ID:2 00:10:52.944 Error Recovery Timeout: Unlimited 00:10:52.944 Command Set Identifier: NVM (00h) 00:10:52.944 Deallocate: Supported 00:10:52.944 Deallocated/Unwritten Error: Supported 00:10:52.944 Deallocated Read Value: All 0x00 00:10:52.944 Deallocate in Write Zeroes: Not Supported 00:10:52.944 Deallocated Guard Field: 0xFFFF 00:10:52.944 Flush: Supported 00:10:52.944 Reservation: Not Supported 00:10:52.944 Namespace Sharing Capabilities: Private 00:10:52.944 Size (in LBAs): 1048576 (4GiB) 00:10:52.944 Capacity (in LBAs): 1048576 (4GiB) 00:10:52.944 Utilization (in LBAs): 1048576 (4GiB) 00:10:52.944 Thin Provisioning: Not Supported 00:10:52.944 Per-NS Atomic Units: No 
00:10:52.944 Maximum Single Source Range Length: 128 00:10:52.944 Maximum Copy Length: 128 00:10:52.944 Maximum Source Range Count: 128 00:10:52.944 NGUID/EUI64 Never Reused: No 00:10:52.944 Namespace Write Protected: No 00:10:52.944 Number of LBA Formats: 8 00:10:52.944 Current LBA Format: LBA Format #04 00:10:52.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.944 00:10:52.944 Namespace ID:3 00:10:52.944 Error Recovery Timeout: Unlimited 00:10:52.944 Command Set Identifier: NVM (00h) 00:10:52.944 Deallocate: Supported 00:10:52.944 Deallocated/Unwritten Error: Supported 00:10:52.944 Deallocated Read Value: All 0x00 00:10:52.944 Deallocate in Write Zeroes: Not Supported 00:10:52.944 Deallocated Guard Field: 0xFFFF 00:10:52.944 Flush: Supported 00:10:52.944 Reservation: Not Supported 00:10:52.944 Namespace Sharing Capabilities: Private 00:10:52.944 Size (in LBAs): 1048576 (4GiB) 00:10:52.944 Capacity (in LBAs): 1048576 (4GiB) 00:10:52.944 Utilization (in LBAs): 1048576 (4GiB) 00:10:52.944 Thin Provisioning: Not Supported 00:10:52.944 Per-NS Atomic Units: No 00:10:52.944 Maximum Single Source Range Length: 128 00:10:52.944 Maximum Copy Length: 128 00:10:52.944 Maximum Source Range Count: 128 00:10:52.944 NGUID/EUI64 Never Reused: No 00:10:52.944 Namespace Write Protected: No 00:10:52.944 Number of LBA Formats: 8 00:10:52.944 Current LBA Format: LBA Format #04 00:10:52.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:52.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:52.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:52.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:52.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:52.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:52.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:52.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:52.944 00:10:53.203 04:51:00 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:53.203 04:51:00 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:10:53.462 ===================================================== 00:10:53.462 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:53.462 ===================================================== 00:10:53.462 Controller Capabilities/Features 00:10:53.462 ================================ 00:10:53.462 Vendor ID: 1b36 00:10:53.462 Subsystem Vendor ID: 1af4 00:10:53.462 Serial Number: 12343 00:10:53.462 Model Number: QEMU NVMe Ctrl 00:10:53.462 Firmware Version: 8.0.0 00:10:53.462 Recommended Arb Burst: 6 00:10:53.462 IEEE OUI Identifier: 00 54 52 00:10:53.462 Multi-path I/O 00:10:53.462 May have multiple subsystem ports: No 00:10:53.462 May have multiple controllers: Yes 00:10:53.462 Associated with SR-IOV VF: No 00:10:53.462 Max Data Transfer Size: 524288 00:10:53.462 Max Number of Namespaces: 256 00:10:53.462 Max Number of I/O Queues: 64 00:10:53.462 NVMe Specification Version (VS): 1.4 00:10:53.462 NVMe Specification Version (Identify): 1.4 00:10:53.462 Maximum Queue Entries: 2048 
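Every controller in this run advertises the same eight LBA formats (512- or 4096-byte data with 0, 8, 16 or 64 bytes of metadata). For the extended formats the metadata travels with each block, so the relative space overhead is simply metadata/data; for example, for format #07:

# Per-block metadata overhead of LBA format #07 (4096 data + 64 metadata).
awk 'BEGIN { printf "overhead: %.4f%%\n", 64 / 4096 * 100 }'    # -> 1.5625%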
00:10:53.462 Contiguous Queues Required: Yes 00:10:53.462 Arbitration Mechanisms Supported 00:10:53.462 Weighted Round Robin: Not Supported 00:10:53.462 Vendor Specific: Not Supported 00:10:53.462 Reset Timeout: 7500 ms 00:10:53.462 Doorbell Stride: 4 bytes 00:10:53.462 NVM Subsystem Reset: Not Supported 00:10:53.462 Command Sets Supported 00:10:53.462 NVM Command Set: Supported 00:10:53.462 Boot Partition: Not Supported 00:10:53.462 Memory Page Size Minimum: 4096 bytes 00:10:53.462 Memory Page Size Maximum: 65536 bytes 00:10:53.462 Persistent Memory Region: Not Supported 00:10:53.462 Optional Asynchronous Events Supported 00:10:53.462 Namespace Attribute Notices: Supported 00:10:53.462 Firmware Activation Notices: Not Supported 00:10:53.462 ANA Change Notices: Not Supported 00:10:53.462 PLE Aggregate Log Change Notices: Not Supported 00:10:53.462 LBA Status Info Alert Notices: Not Supported 00:10:53.462 EGE Aggregate Log Change Notices: Not Supported 00:10:53.462 Normal NVM Subsystem Shutdown event: Not Supported 00:10:53.462 Zone Descriptor Change Notices: Not Supported 00:10:53.462 Discovery Log Change Notices: Not Supported 00:10:53.462 Controller Attributes 00:10:53.462 128-bit Host Identifier: Not Supported 00:10:53.462 Non-Operational Permissive Mode: Not Supported 00:10:53.462 NVM Sets: Not Supported 00:10:53.462 Read Recovery Levels: Not Supported 00:10:53.462 Endurance Groups: Supported 00:10:53.462 Predictable Latency Mode: Not Supported 00:10:53.462 Traffic Based Keep Alive: Not Supported 00:10:53.462 Namespace Granularity: Not Supported 00:10:53.462 SQ Associations: Not Supported 00:10:53.462 UUID List: Not Supported 00:10:53.462 Multi-Domain Subsystem: Not Supported 00:10:53.462 Fixed Capacity Management: Not Supported 00:10:53.462 Variable Capacity Management: Not Supported 00:10:53.462 Delete Endurance Group: Not Supported 00:10:53.462 Delete NVM Set: Not Supported 00:10:53.462 Extended LBA Formats Supported: Supported 00:10:53.462 Flexible Data Placement Supported: Supported 00:10:53.462 00:10:53.462 Controller Memory Buffer Support 00:10:53.462 ================================ 00:10:53.462 Supported: No 00:10:53.462 00:10:53.462 Persistent Memory Region Support 00:10:53.462 ================================ 00:10:53.462 Supported: No 00:10:53.462 00:10:53.462 Admin Command Set Attributes 00:10:53.462 ============================ 00:10:53.462 Security Send/Receive: Not Supported 00:10:53.462 Format NVM: Supported 00:10:53.462 Firmware Activate/Download: Not Supported 00:10:53.462 Namespace Management: Supported 00:10:53.462 Device Self-Test: Not Supported 00:10:53.462 Directives: Supported 00:10:53.462 NVMe-MI: Not Supported 00:10:53.462 Virtualization Management: Not Supported 00:10:53.463 Doorbell Buffer Config: Supported 00:10:53.463 Get LBA Status Capability: Not Supported 00:10:53.463 Command & Feature Lockdown Capability: Not Supported 00:10:53.463 Abort Command Limit: 4 00:10:53.463 Async Event Request Limit: 4 00:10:53.463 Number of Firmware Slots: N/A 00:10:53.463 Firmware Slot 1 Read-Only: N/A 00:10:53.463 Firmware Activation Without Reset: N/A 00:10:53.463 Multiple Update Detection Support: N/A 00:10:53.463 Firmware Update Granularity: No Information Provided 00:10:53.463 Per-Namespace SMART Log: Yes 00:10:53.463 Asymmetric Namespace Access Log Page: Not Supported 00:10:53.463 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:53.463 Command Effects Log Page: Supported 00:10:53.463 Get Log Page Extended Data: Supported 00:10:53.463 Telemetry Log Pages: Not
Supported 00:10:53.463 Persistent Event Log Pages: Not Supported 00:10:53.463 Supported Log Pages Log Page: May Support 00:10:53.463 Commands Supported & Effects Log Page: Not Supported 00:10:53.463 Feature Identifiers & Effects Log Page: May Support 00:10:53.463 NVMe-MI Commands & Effects Log Page: May Support 00:10:53.463 Data Area 4 for Telemetry Log: Not Supported 00:10:53.463 Error Log Page Entries Supported: 1 00:10:53.463 Keep Alive: Not Supported 00:10:53.463 00:10:53.463 NVM Command Set Attributes 00:10:53.463 ========================== 00:10:53.463 Submission Queue Entry Size 00:10:53.463 Max: 64 00:10:53.463 Min: 64 00:10:53.463 Completion Queue Entry Size 00:10:53.463 Max: 16 00:10:53.463 Min: 16 00:10:53.463 Number of Namespaces: 256 00:10:53.463 Compare Command: Supported 00:10:53.463 Write Uncorrectable Command: Not Supported 00:10:53.463 Dataset Management Command: Supported 00:10:53.463 Write Zeroes Command: Supported 00:10:53.463 Set Features Save Field: Supported 00:10:53.463 Reservations: Not Supported 00:10:53.463 Timestamp: Supported 00:10:53.463 Copy: Supported 00:10:53.463 Volatile Write Cache: Present 00:10:53.463 Atomic Write Unit (Normal): 1 00:10:53.463 Atomic Write Unit (PFail): 1 00:10:53.463 Atomic Compare & Write Unit: 1 00:10:53.463 Fused Compare & Write: Not Supported 00:10:53.463 Scatter-Gather List 00:10:53.463 SGL Command Set: Supported 00:10:53.463 SGL Keyed: Not Supported 00:10:53.463 SGL Bit Bucket Descriptor: Not Supported 00:10:53.463 SGL Metadata Pointer: Not Supported 00:10:53.463 Oversized SGL: Not Supported 00:10:53.463 SGL Metadata Address: Not Supported 00:10:53.463 SGL Offset: Not Supported 00:10:53.463 Transport SGL Data Block: Not Supported 00:10:53.463 Replay Protected Memory Block: Not Supported 00:10:53.463 00:10:53.463 Firmware Slot Information 00:10:53.463 ========================= 00:10:53.463 Active slot: 1 00:10:53.463 Slot 1 Firmware Revision: 1.0 00:10:53.463 00:10:53.463 00:10:53.463 Commands Supported and Effects 00:10:53.463 ============================== 00:10:53.463 Admin Commands 00:10:53.463 -------------- 00:10:53.463 Delete I/O Submission Queue (00h): Supported 00:10:53.463 Create I/O Submission Queue (01h): Supported 00:10:53.463 Get Log Page (02h): Supported 00:10:53.463 Delete I/O Completion Queue (04h): Supported 00:10:53.463 Create I/O Completion Queue (05h): Supported 00:10:53.463 Identify (06h): Supported 00:10:53.463 Abort (08h): Supported 00:10:53.463 Set Features (09h): Supported 00:10:53.463 Get Features (0Ah): Supported 00:10:53.463 Asynchronous Event Request (0Ch): Supported 00:10:53.463 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:53.463 Directive Send (19h): Supported 00:10:53.463 Directive Receive (1Ah): Supported 00:10:53.463 Virtualization Management (1Ch): Supported 00:10:53.463 Doorbell Buffer Config (7Ch): Supported 00:10:53.463 Format NVM (80h): Supported LBA-Change 00:10:53.463 I/O Commands 00:10:53.463 ------------ 00:10:53.463 Flush (00h): Supported LBA-Change 00:10:53.463 Write (01h): Supported LBA-Change 00:10:53.463 Read (02h): Supported 00:10:53.463 Compare (05h): Supported 00:10:53.463 Write Zeroes (08h): Supported LBA-Change 00:10:53.463 Dataset Management (09h): Supported LBA-Change 00:10:53.463 Unknown (0Ch): Supported 00:10:53.463 Unknown (12h): Supported 00:10:53.463 Copy (19h): Supported LBA-Change 00:10:53.463 Unknown (1Dh): Supported LBA-Change 00:10:53.463 00:10:53.463 Error Log 00:10:53.463 ========= 00:10:53.463 00:10:53.463 Arbitration 00:10:53.463 ===========
00:10:53.463 Arbitration Burst: no limit 00:10:53.463 00:10:53.463 Power Management 00:10:53.463 ================ 00:10:53.463 Number of Power States: 1 00:10:53.463 Current Power State: Power State #0 00:10:53.463 Power State #0: 00:10:53.463 Max Power: 25.00 W 00:10:53.463 Non-Operational State: Operational 00:10:53.463 Entry Latency: 16 microseconds 00:10:53.463 Exit Latency: 4 microseconds 00:10:53.463 Relative Read Throughput: 0 00:10:53.463 Relative Read Latency: 0 00:10:53.463 Relative Write Throughput: 0 00:10:53.463 Relative Write Latency: 0 00:10:53.463 Idle Power: Not Reported 00:10:53.463 Active Power: Not Reported 00:10:53.463 Non-Operational Permissive Mode: Not Supported 00:10:53.463 00:10:53.463 Health Information 00:10:53.463 ================== 00:10:53.463 Critical Warnings: 00:10:53.463 Available Spare Space: OK 00:10:53.463 Temperature: OK 00:10:53.463 Device Reliability: OK 00:10:53.463 Read Only: No 00:10:53.463 Volatile Memory Backup: OK 00:10:53.463 Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.463 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:53.463 Available Spare: 0% 00:10:53.463 Available Spare Threshold: 0% 00:10:53.463 Life Percentage Used: 0% 00:10:53.463 Data Units Read: 1257 00:10:53.463 Data Units Written: 598 00:10:53.463 Host Read Commands: 60231 00:10:53.463 Host Write Commands: 30018 00:10:53.463 Controller Busy Time: 0 minutes 00:10:53.463 Power Cycles: 0 00:10:53.463 Power On Hours: 0 hours 00:10:53.463 Unsafe Shutdowns: 0 00:10:53.463 Unrecoverable Media Errors: 0 00:10:53.463 Lifetime Error Log Entries: 0 00:10:53.463 Warning Temperature Time: 0 minutes 00:10:53.463 Critical Temperature Time: 0 minutes 00:10:53.463 00:10:53.463 Number of Queues 00:10:53.463 ================ 00:10:53.463 Number of I/O Submission Queues: 64 00:10:53.463 Number of I/O Completion Queues: 64 00:10:53.463 00:10:53.463 ZNS Specific Controller Data 00:10:53.463 ============================ 00:10:53.463 Zone Append Size Limit: 0 00:10:53.463 00:10:53.463 00:10:53.463 Active Namespaces 00:10:53.463 ================= 00:10:53.463 Namespace ID:1 00:10:53.463 Error Recovery Timeout: Unlimited 00:10:53.463 Command Set Identifier: NVM (00h) 00:10:53.463 Deallocate: Supported 00:10:53.463 Deallocated/Unwritten Error: Supported 00:10:53.463 Deallocated Read Value: All 0x00 00:10:53.463 Deallocate in Write Zeroes: Not Supported 00:10:53.463 Deallocated Guard Field: 0xFFFF 00:10:53.463 Flush: Supported 00:10:53.463 Reservation: Not Supported 00:10:53.463 Namespace Sharing Capabilities: Multiple Controllers 00:10:53.463 Size (in LBAs): 262144 (1GiB) 00:10:53.463 Capacity (in LBAs): 262144 (1GiB) 00:10:53.463 Utilization (in LBAs): 262144 (1GiB) 00:10:53.463 Thin Provisioning: Not Supported 00:10:53.463 Per-NS Atomic Units: No 00:10:53.463 Maximum Single Source Range Length: 128 00:10:53.463 Maximum Copy Length: 128 00:10:53.463 Maximum Source Range Count: 128 00:10:53.463 NGUID/EUI64 Never Reused: No 00:10:53.463 Namespace Write Protected: No 00:10:53.463 Endurance group ID: 1 00:10:53.463 Number of LBA Formats: 8 00:10:53.463 Current LBA Format: LBA Format #04 00:10:53.463 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.463 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:53.463 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:53.463 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:53.463 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:53.463 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:53.463 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:10:53.463 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:53.463 00:10:53.463 Get Feature FDP: 00:10:53.463 ================ 00:10:53.463 Enabled: Yes 00:10:53.463 FDP configuration index: 0 00:10:53.463 00:10:53.463 FDP configurations log page 00:10:53.463 =========================== 00:10:53.463 Number of FDP configurations: 1 00:10:53.463 Version: 0 00:10:53.463 Size: 112 00:10:53.463 FDP Configuration Descriptor: 0 00:10:53.463 Descriptor Size: 96 00:10:53.463 Reclaim Group Identifier format: 2 00:10:53.463 FDP Volatile Write Cache: Not Present 00:10:53.463 FDP Configuration: Valid 00:10:53.463 Vendor Specific Size: 0 00:10:53.463 Number of Reclaim Groups: 2 00:10:53.463 Number of Reclaim Unit Handles: 8 00:10:53.463 Max Placement Identifiers: 128 00:10:53.463 Number of Namespaces Supported: 256 00:10:53.463 Reclaim unit Nominal Size: 6000000 bytes 00:10:53.463 Estimated Reclaim Unit Time Limit: Not Reported 00:10:53.463 RUH Desc #000: RUH Type: Initially Isolated 00:10:53.463 RUH Desc #001: RUH Type: Initially Isolated 00:10:53.463 RUH Desc #002: RUH Type: Initially Isolated 00:10:53.464 RUH Desc #003: RUH Type: Initially Isolated 00:10:53.464 RUH Desc #004: RUH Type: Initially Isolated 00:10:53.464 RUH Desc #005: RUH Type: Initially Isolated 00:10:53.464 RUH Desc #006: RUH Type: Initially Isolated 00:10:53.464 RUH Desc #007: RUH Type: Initially Isolated 00:10:53.464 00:10:53.464 FDP reclaim unit handle usage log page 00:10:53.464 ====================================== 00:10:53.464 Number of Reclaim Unit Handles: 8 00:10:53.464 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:53.464 RUH Usage Desc #001: RUH Attributes: Unused 00:10:53.464 RUH Usage Desc #002: RUH Attributes: Unused 00:10:53.464 RUH Usage Desc #003: RUH Attributes: Unused 00:10:53.464 RUH Usage Desc #004: RUH Attributes: Unused 00:10:53.464 RUH Usage Desc #005: RUH Attributes: Unused 00:10:53.464 RUH Usage Desc #006: RUH Attributes: Unused 00:10:53.464 RUH Usage Desc #007: RUH Attributes: Unused 00:10:53.464 00:10:53.464 FDP statistics log page 00:10:53.464 ======================= 00:10:53.464 Host bytes with metadata written: 389713920 00:10:53.464 Media bytes with metadata written: 389758976 00:10:53.464 Media bytes erased: 0 00:10:53.464 00:10:53.464 FDP events log page 00:10:53.464 =================== 00:10:53.464 Number of FDP events: 0 00:10:53.464 00:10:53.464 00:10:53.464 real 0m1.572s 00:10:53.464 user 0m0.627s 00:10:53.464 sys 0m0.733s 00:10:53.464 04:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.464 04:51:00 -- common/autotest_common.sh@10 -- # set +x 00:10:53.464 ************************************ 00:10:53.464 END TEST nvme_identify 00:10:53.464 ************************************ 00:10:53.464 04:51:00 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:53.464 04:51:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:53.464 04:51:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.464 04:51:00 -- common/autotest_common.sh@10 -- # set +x 00:10:53.464 ************************************ 00:10:53.464 START TEST nvme_perf 00:10:53.464 ************************************ 00:10:53.464 04:51:00 -- common/autotest_common.sh@1104 -- # nvme_perf 00:10:53.464 04:51:00 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:54.841 Initializing NVMe Controllers 00:10:54.841 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:54.841
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:54.841 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:54.841 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:54.841 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:54.841 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:54.841 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:54.841 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:54.841 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:54.841 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:54.841 Initialization complete. Launching workers. 00:10:54.841 ======================================================== 00:10:54.841 Latency(us) 00:10:54.841 Device Information : IOPS MiB/s Average min max 00:10:54.841 PCIE (0000:00:06.0) NSID 1 from core 0: 14462.73 169.49 8843.16 6728.57 37338.84 00:10:54.841 PCIE (0000:00:07.0) NSID 1 from core 0: 14462.73 169.49 8830.08 6326.43 35318.32 00:10:54.841 PCIE (0000:00:09.0) NSID 1 from core 0: 14462.73 169.49 8814.19 6985.36 34840.15 00:10:54.841 PCIE (0000:00:08.0) NSID 1 from core 0: 14462.73 169.49 8798.35 6922.35 33015.71 00:10:54.841 PCIE (0000:00:08.0) NSID 2 from core 0: 14462.73 169.49 8782.44 7034.88 31244.76 00:10:54.841 PCIE (0000:00:08.0) NSID 3 from core 0: 14462.73 169.49 8765.56 6984.96 29390.28 00:10:54.841 ======================================================== 00:10:54.841 Total : 86776.36 1016.91 8805.63 6326.43 37338.84 00:10:54.841 00:10:54.841 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:54.841 ================================================================================= 00:10:54.841 1.00000% : 7119.593us 00:10:54.841 10.00000% : 7506.851us 00:10:54.841 25.00000% : 7983.476us 00:10:54.841 50.00000% : 8579.258us 00:10:54.841 75.00000% : 9175.040us 00:10:54.841 90.00000% : 9889.978us 00:10:54.841 95.00000% : 10426.182us 00:10:54.841 98.00000% : 10962.385us 00:10:54.841 99.00000% : 12153.949us 00:10:54.841 99.50000% : 34793.658us 00:10:54.841 99.90000% : 36938.473us 00:10:54.841 99.99000% : 37415.098us 00:10:54.841 99.99900% : 37415.098us 00:10:54.841 99.99990% : 37415.098us 00:10:54.841 99.99999% : 37415.098us 00:10:54.841 00:10:54.841 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:54.841 ================================================================================= 00:10:54.841 1.00000% : 7268.538us 00:10:54.841 10.00000% : 7685.585us 00:10:54.841 25.00000% : 8043.055us 00:10:54.841 50.00000% : 8579.258us 00:10:54.841 75.00000% : 9115.462us 00:10:54.841 90.00000% : 9830.400us 00:10:54.841 95.00000% : 10307.025us 00:10:54.841 98.00000% : 10783.651us 00:10:54.841 99.00000% : 12392.262us 00:10:54.841 99.50000% : 32887.156us 00:10:54.841 99.90000% : 35031.971us 00:10:54.841 99.99000% : 35508.596us 00:10:54.841 99.99900% : 35508.596us 00:10:54.841 99.99990% : 35508.596us 00:10:54.841 99.99999% : 35508.596us 00:10:54.841 00:10:54.841 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:54.841 ================================================================================= 00:10:54.841 1.00000% : 7268.538us 00:10:54.841 10.00000% : 7685.585us 00:10:54.841 25.00000% : 8043.055us 00:10:54.841 50.00000% : 8579.258us 00:10:54.841 75.00000% : 9115.462us 00:10:54.841 90.00000% : 9770.822us 00:10:54.841 95.00000% : 10247.447us 00:10:54.841 98.00000% : 10724.073us 00:10:54.841 99.00000% : 11677.324us 00:10:54.841 99.50000% : 32172.218us 00:10:54.841 99.90000% : 34317.033us 
00:10:54.841 99.99000% : 35031.971us 00:10:54.841 99.99900% : 35031.971us 00:10:54.841 99.99990% : 35031.971us 00:10:54.841 99.99999% : 35031.971us 00:10:54.841 00:10:54.841 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:54.841 ================================================================================= 00:10:54.841 1.00000% : 7298.327us 00:10:54.841 10.00000% : 7685.585us 00:10:54.841 25.00000% : 8043.055us 00:10:54.841 50.00000% : 8579.258us 00:10:54.841 75.00000% : 9115.462us 00:10:54.841 90.00000% : 9770.822us 00:10:54.841 95.00000% : 10307.025us 00:10:54.841 98.00000% : 10724.073us 00:10:54.841 99.00000% : 11498.589us 00:10:54.841 99.50000% : 30265.716us 00:10:54.841 99.90000% : 32648.844us 00:10:54.841 99.99000% : 33125.469us 00:10:54.841 99.99900% : 33125.469us 00:10:54.841 99.99990% : 33125.469us 00:10:54.841 99.99999% : 33125.469us 00:10:54.841 00:10:54.842 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:54.842 ================================================================================= 00:10:54.842 1.00000% : 7298.327us 00:10:54.842 10.00000% : 7685.585us 00:10:54.842 25.00000% : 8043.055us 00:10:54.842 50.00000% : 8579.258us 00:10:54.842 75.00000% : 9115.462us 00:10:54.842 90.00000% : 9770.822us 00:10:54.842 95.00000% : 10307.025us 00:10:54.842 98.00000% : 10724.073us 00:10:54.842 99.00000% : 11439.011us 00:10:54.842 99.50000% : 28478.371us 00:10:54.842 99.90000% : 30742.342us 00:10:54.842 99.99000% : 31218.967us 00:10:54.842 99.99900% : 31457.280us 00:10:54.842 99.99990% : 31457.280us 00:10:54.842 99.99999% : 31457.280us 00:10:54.842 00:10:54.842 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:54.842 ================================================================================= 00:10:54.842 1.00000% : 7298.327us 00:10:54.842 10.00000% : 7685.585us 00:10:54.842 25.00000% : 8043.055us 00:10:54.842 50.00000% : 8579.258us 00:10:54.842 75.00000% : 9115.462us 00:10:54.842 90.00000% : 9770.822us 00:10:54.842 95.00000% : 10307.025us 00:10:54.842 98.00000% : 10843.229us 00:10:54.842 99.00000% : 11617.745us 00:10:54.842 99.50000% : 26571.869us 00:10:54.842 99.90000% : 28954.996us 00:10:54.842 99.99000% : 29431.622us 00:10:54.842 99.99900% : 29431.622us 00:10:54.842 99.99990% : 29431.622us 00:10:54.842 99.99999% : 29431.622us 00:10:54.842 00:10:54.842 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:54.842 ============================================================================== 00:10:54.842 Range in us Cumulative IO count 00:10:54.842 6702.545 - 6732.335: 0.0069% ( 1) 00:10:54.842 6732.335 - 6762.124: 0.0207% ( 2) 00:10:54.842 6762.124 - 6791.913: 0.0277% ( 1) 00:10:54.842 6791.913 - 6821.702: 0.0415% ( 2) 00:10:54.842 6821.702 - 6851.491: 0.0691% ( 4) 00:10:54.842 6851.491 - 6881.280: 0.0899% ( 3) 00:10:54.842 6881.280 - 6911.069: 0.1106% ( 3) 00:10:54.842 6911.069 - 6940.858: 0.1728% ( 9) 00:10:54.842 6940.858 - 6970.647: 0.2558% ( 12) 00:10:54.842 6970.647 - 7000.436: 0.3180% ( 9) 00:10:54.842 7000.436 - 7030.225: 0.4287% ( 16) 00:10:54.842 7030.225 - 7060.015: 0.6153% ( 27) 00:10:54.842 7060.015 - 7089.804: 0.8573% ( 35) 00:10:54.842 7089.804 - 7119.593: 1.1684% ( 45) 00:10:54.842 7119.593 - 7149.382: 1.6247% ( 66) 00:10:54.842 7149.382 - 7179.171: 2.0603% ( 63) 00:10:54.842 7179.171 - 7208.960: 2.6756% ( 89) 00:10:54.842 7208.960 - 7238.749: 3.2218% ( 79) 00:10:54.842 7238.749 - 7268.538: 3.9616% ( 107) 00:10:54.842 7268.538 - 7298.327: 4.6460% ( 99) 00:10:54.842 7298.327 - 
7328.116: 5.3789% ( 106) 00:10:54.842 7328.116 - 7357.905: 6.1117% ( 106) 00:10:54.842 7357.905 - 7387.695: 6.9068% ( 115) 00:10:54.842 7387.695 - 7417.484: 7.7226% ( 118) 00:10:54.842 7417.484 - 7447.273: 8.6905% ( 140) 00:10:54.842 7447.273 - 7477.062: 9.5548% ( 125) 00:10:54.842 7477.062 - 7506.851: 10.4535% ( 130) 00:10:54.842 7506.851 - 7536.640: 11.4215% ( 140) 00:10:54.842 7536.640 - 7566.429: 12.4654% ( 151) 00:10:54.842 7566.429 - 7596.218: 13.3366% ( 126) 00:10:54.842 7596.218 - 7626.007: 14.4013% ( 154) 00:10:54.842 7626.007 - 7685.585: 16.4339% ( 294) 00:10:54.842 7685.585 - 7745.164: 18.3905% ( 283) 00:10:54.842 7745.164 - 7804.742: 20.4784% ( 302) 00:10:54.842 7804.742 - 7864.320: 22.4972% ( 292) 00:10:54.842 7864.320 - 7923.898: 24.6059% ( 305) 00:10:54.842 7923.898 - 7983.476: 26.7008% ( 303) 00:10:54.842 7983.476 - 8043.055: 28.8924% ( 317) 00:10:54.842 8043.055 - 8102.633: 31.1463% ( 326) 00:10:54.842 8102.633 - 8162.211: 33.4693% ( 336) 00:10:54.842 8162.211 - 8221.789: 35.8822% ( 349) 00:10:54.842 8221.789 - 8281.367: 38.2467% ( 342) 00:10:54.842 8281.367 - 8340.945: 40.6250% ( 344) 00:10:54.842 8340.945 - 8400.524: 43.1278% ( 362) 00:10:54.842 8400.524 - 8460.102: 45.6444% ( 364) 00:10:54.842 8460.102 - 8519.680: 48.0642% ( 350) 00:10:54.842 8519.680 - 8579.258: 50.5185% ( 355) 00:10:54.842 8579.258 - 8638.836: 53.0973% ( 373) 00:10:54.842 8638.836 - 8698.415: 55.5932% ( 361) 00:10:54.842 8698.415 - 8757.993: 58.1444% ( 369) 00:10:54.842 8757.993 - 8817.571: 60.6195% ( 358) 00:10:54.842 8817.571 - 8877.149: 63.1706% ( 369) 00:10:54.842 8877.149 - 8936.727: 65.6665% ( 361) 00:10:54.842 8936.727 - 8996.305: 68.2038% ( 367) 00:10:54.842 8996.305 - 9055.884: 70.6513% ( 354) 00:10:54.842 9055.884 - 9115.462: 73.0296% ( 344) 00:10:54.842 9115.462 - 9175.040: 75.4287% ( 347) 00:10:54.842 9175.040 - 9234.618: 77.7378% ( 334) 00:10:54.842 9234.618 - 9294.196: 79.8119% ( 300) 00:10:54.842 9294.196 - 9353.775: 81.6095% ( 260) 00:10:54.842 9353.775 - 9413.353: 83.1374% ( 221) 00:10:54.842 9413.353 - 9472.931: 84.6032% ( 212) 00:10:54.842 9472.931 - 9532.509: 85.8131% ( 175) 00:10:54.842 9532.509 - 9592.087: 86.9331% ( 162) 00:10:54.842 9592.087 - 9651.665: 87.8319% ( 130) 00:10:54.842 9651.665 - 9711.244: 88.5993% ( 111) 00:10:54.842 9711.244 - 9770.822: 89.2976% ( 101) 00:10:54.842 9770.822 - 9830.400: 89.9060% ( 88) 00:10:54.842 9830.400 - 9889.978: 90.4452% ( 78) 00:10:54.842 9889.978 - 9949.556: 90.9914% ( 79) 00:10:54.842 9949.556 - 10009.135: 91.5100% ( 75) 00:10:54.842 10009.135 - 10068.713: 92.0976% ( 85) 00:10:54.842 10068.713 - 10128.291: 92.5747% ( 69) 00:10:54.842 10128.291 - 10187.869: 93.0725% ( 72) 00:10:54.842 10187.869 - 10247.447: 93.5702% ( 72) 00:10:54.842 10247.447 - 10307.025: 94.1095% ( 78) 00:10:54.842 10307.025 - 10366.604: 94.5589% ( 65) 00:10:54.842 10366.604 - 10426.182: 95.0567% ( 72) 00:10:54.842 10426.182 - 10485.760: 95.5130% ( 66) 00:10:54.842 10485.760 - 10545.338: 95.9970% ( 70) 00:10:54.842 10545.338 - 10604.916: 96.3980% ( 58) 00:10:54.842 10604.916 - 10664.495: 96.7851% ( 56) 00:10:54.842 10664.495 - 10724.073: 97.1377% ( 51) 00:10:54.842 10724.073 - 10783.651: 97.4281% ( 42) 00:10:54.842 10783.651 - 10843.229: 97.6908% ( 38) 00:10:54.842 10843.229 - 10902.807: 97.8567% ( 24) 00:10:54.842 10902.807 - 10962.385: 98.0296% ( 25) 00:10:54.842 10962.385 - 11021.964: 98.1679% ( 20) 00:10:54.842 11021.964 - 11081.542: 98.2785% ( 16) 00:10:54.842 11081.542 - 11141.120: 98.3753% ( 14) 00:10:54.842 11141.120 - 11200.698: 98.4721% ( 14) 00:10:54.842 
11200.698 - 11260.276: 98.5412% ( 10) 00:10:54.842 11260.276 - 11319.855: 98.5896% ( 7) 00:10:54.842 11319.855 - 11379.433: 98.6311% ( 6) 00:10:54.842 11379.433 - 11439.011: 98.6864% ( 8) 00:10:54.842 11439.011 - 11498.589: 98.7140% ( 4) 00:10:54.842 11498.589 - 11558.167: 98.7624% ( 7) 00:10:54.842 11558.167 - 11617.745: 98.7901% ( 4) 00:10:54.842 11617.745 - 11677.324: 98.8108% ( 3) 00:10:54.842 11677.324 - 11736.902: 98.8523% ( 6) 00:10:54.842 11736.902 - 11796.480: 98.8869% ( 5) 00:10:54.842 11796.480 - 11856.058: 98.9215% ( 5) 00:10:54.842 11856.058 - 11915.636: 98.9284% ( 1) 00:10:54.842 11915.636 - 11975.215: 98.9560% ( 4) 00:10:54.842 11975.215 - 12034.793: 98.9629% ( 1) 00:10:54.842 12034.793 - 12094.371: 98.9768% ( 2) 00:10:54.842 12094.371 - 12153.949: 99.0044% ( 4) 00:10:54.842 12213.527 - 12273.105: 99.0321% ( 4) 00:10:54.842 12273.105 - 12332.684: 99.0459% ( 2) 00:10:54.842 12332.684 - 12392.262: 99.0597% ( 2) 00:10:54.842 12392.262 - 12451.840: 99.0805% ( 3) 00:10:54.842 12451.840 - 12511.418: 99.0874% ( 1) 00:10:54.842 12511.418 - 12570.996: 99.1150% ( 4) 00:10:54.842 32410.531 - 32648.844: 99.1289% ( 2) 00:10:54.842 32648.844 - 32887.156: 99.1704% ( 6) 00:10:54.842 32887.156 - 33125.469: 99.2118% ( 6) 00:10:54.842 33125.469 - 33363.782: 99.2533% ( 6) 00:10:54.842 33363.782 - 33602.095: 99.2879% ( 5) 00:10:54.842 33602.095 - 33840.407: 99.3363% ( 7) 00:10:54.842 33840.407 - 34078.720: 99.3709% ( 5) 00:10:54.842 34078.720 - 34317.033: 99.4123% ( 6) 00:10:54.842 34317.033 - 34555.345: 99.4607% ( 7) 00:10:54.842 34555.345 - 34793.658: 99.5091% ( 7) 00:10:54.842 34793.658 - 35031.971: 99.5575% ( 7) 00:10:54.842 35031.971 - 35270.284: 99.5852% ( 4) 00:10:54.842 35270.284 - 35508.596: 99.6336% ( 7) 00:10:54.842 35508.596 - 35746.909: 99.6751% ( 6) 00:10:54.842 35746.909 - 35985.222: 99.7304% ( 8) 00:10:54.842 35985.222 - 36223.535: 99.7718% ( 6) 00:10:54.842 36223.535 - 36461.847: 99.8202% ( 7) 00:10:54.842 36461.847 - 36700.160: 99.8686% ( 7) 00:10:54.842 36700.160 - 36938.473: 99.9170% ( 7) 00:10:54.842 36938.473 - 37176.785: 99.9723% ( 8) 00:10:54.842 37176.785 - 37415.098: 100.0000% ( 4) 00:10:54.842 00:10:54.842 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:54.842 ============================================================================== 00:10:54.842 Range in us Cumulative IO count 00:10:54.842 6315.287 - 6345.076: 0.0069% ( 1) 00:10:54.842 6345.076 - 6374.865: 0.0138% ( 1) 00:10:54.842 6374.865 - 6404.655: 0.0277% ( 2) 00:10:54.842 6404.655 - 6434.444: 0.0346% ( 1) 00:10:54.842 6434.444 - 6464.233: 0.0484% ( 2) 00:10:54.842 6464.233 - 6494.022: 0.0622% ( 2) 00:10:54.842 6494.022 - 6523.811: 0.0761% ( 2) 00:10:54.842 6523.811 - 6553.600: 0.0899% ( 2) 00:10:54.842 6553.600 - 6583.389: 0.0968% ( 1) 00:10:54.842 6583.389 - 6613.178: 0.1106% ( 2) 00:10:54.842 6613.178 - 6642.967: 0.1244% ( 2) 00:10:54.842 6642.967 - 6672.756: 0.1383% ( 2) 00:10:54.842 6672.756 - 6702.545: 0.1452% ( 1) 00:10:54.842 6702.545 - 6732.335: 0.1590% ( 2) 00:10:54.842 6732.335 - 6762.124: 0.1659% ( 1) 00:10:54.842 6762.124 - 6791.913: 0.1798% ( 2) 00:10:54.842 6791.913 - 6821.702: 0.1936% ( 2) 00:10:54.842 6911.069 - 6940.858: 0.2005% ( 1) 00:10:54.842 6940.858 - 6970.647: 0.2143% ( 2) 00:10:54.842 6970.647 - 7000.436: 0.2282% ( 2) 00:10:54.842 7000.436 - 7030.225: 0.2420% ( 2) 00:10:54.842 7030.225 - 7060.015: 0.2696% ( 4) 00:10:54.843 7060.015 - 7089.804: 0.3042% ( 5) 00:10:54.843 7089.804 - 7119.593: 0.3319% ( 4) 00:10:54.843 7119.593 - 7149.382: 0.3803% ( 7) 00:10:54.843 
7149.382 - 7179.171: 0.4770% ( 14) 00:10:54.843 7179.171 - 7208.960: 0.6153% ( 20) 00:10:54.843 7208.960 - 7238.749: 0.8020% ( 27) 00:10:54.843 7238.749 - 7268.538: 1.0578% ( 37) 00:10:54.843 7268.538 - 7298.327: 1.4173% ( 52) 00:10:54.843 7298.327 - 7328.116: 1.8252% ( 59) 00:10:54.843 7328.116 - 7357.905: 2.3023% ( 69) 00:10:54.843 7357.905 - 7387.695: 2.8761% ( 83) 00:10:54.843 7387.695 - 7417.484: 3.6090% ( 106) 00:10:54.843 7417.484 - 7447.273: 4.3487% ( 107) 00:10:54.843 7447.273 - 7477.062: 5.1784% ( 120) 00:10:54.843 7477.062 - 7506.851: 5.9942% ( 118) 00:10:54.843 7506.851 - 7536.640: 6.9414% ( 137) 00:10:54.843 7536.640 - 7566.429: 7.8471% ( 131) 00:10:54.843 7566.429 - 7596.218: 8.8288% ( 142) 00:10:54.843 7596.218 - 7626.007: 9.8590% ( 149) 00:10:54.843 7626.007 - 7685.585: 12.0506% ( 317) 00:10:54.843 7685.585 - 7745.164: 14.2976% ( 325) 00:10:54.843 7745.164 - 7804.742: 16.5998% ( 333) 00:10:54.843 7804.742 - 7864.320: 19.0611% ( 356) 00:10:54.843 7864.320 - 7923.898: 21.4947% ( 352) 00:10:54.843 7923.898 - 7983.476: 23.9560% ( 356) 00:10:54.843 7983.476 - 8043.055: 26.4864% ( 366) 00:10:54.843 8043.055 - 8102.633: 29.0238% ( 367) 00:10:54.843 8102.633 - 8162.211: 31.6095% ( 374) 00:10:54.843 8162.211 - 8221.789: 34.1952% ( 374) 00:10:54.843 8221.789 - 8281.367: 36.8155% ( 379) 00:10:54.843 8281.367 - 8340.945: 39.5465% ( 395) 00:10:54.843 8340.945 - 8400.524: 42.3050% ( 399) 00:10:54.843 8400.524 - 8460.102: 45.2295% ( 423) 00:10:54.843 8460.102 - 8519.680: 48.1402% ( 421) 00:10:54.843 8519.680 - 8579.258: 51.1062% ( 429) 00:10:54.843 8579.258 - 8638.836: 53.9754% ( 415) 00:10:54.843 8638.836 - 8698.415: 57.0312% ( 442) 00:10:54.843 8698.415 - 8757.993: 60.0041% ( 430) 00:10:54.843 8757.993 - 8817.571: 62.9909% ( 432) 00:10:54.843 8817.571 - 8877.149: 66.0329% ( 440) 00:10:54.843 8877.149 - 8936.727: 68.9782% ( 426) 00:10:54.843 8936.727 - 8996.305: 71.8612% ( 417) 00:10:54.843 8996.305 - 9055.884: 74.5368% ( 387) 00:10:54.843 9055.884 - 9115.462: 76.9981% ( 356) 00:10:54.843 9115.462 - 9175.040: 79.0998% ( 304) 00:10:54.843 9175.040 - 9234.618: 80.8490% ( 253) 00:10:54.843 9234.618 - 9294.196: 82.4115% ( 226) 00:10:54.843 9294.196 - 9353.775: 83.8288% ( 205) 00:10:54.843 9353.775 - 9413.353: 85.0249% ( 173) 00:10:54.843 9413.353 - 9472.931: 86.0412% ( 147) 00:10:54.843 9472.931 - 9532.509: 86.9054% ( 125) 00:10:54.843 9532.509 - 9592.087: 87.6314% ( 105) 00:10:54.843 9592.087 - 9651.665: 88.2536% ( 90) 00:10:54.843 9651.665 - 9711.244: 88.9311% ( 98) 00:10:54.843 9711.244 - 9770.822: 89.5465% ( 89) 00:10:54.843 9770.822 - 9830.400: 90.1963% ( 94) 00:10:54.843 9830.400 - 9889.978: 90.8117% ( 89) 00:10:54.843 9889.978 - 9949.556: 91.4685% ( 95) 00:10:54.843 9949.556 - 10009.135: 92.0976% ( 91) 00:10:54.843 10009.135 - 10068.713: 92.7475% ( 94) 00:10:54.843 10068.713 - 10128.291: 93.3767% ( 91) 00:10:54.843 10128.291 - 10187.869: 93.9920% ( 89) 00:10:54.843 10187.869 - 10247.447: 94.6073% ( 89) 00:10:54.843 10247.447 - 10307.025: 95.1673% ( 81) 00:10:54.843 10307.025 - 10366.604: 95.7204% ( 80) 00:10:54.843 10366.604 - 10426.182: 96.2389% ( 75) 00:10:54.843 10426.182 - 10485.760: 96.7367% ( 72) 00:10:54.843 10485.760 - 10545.338: 97.1170% ( 55) 00:10:54.843 10545.338 - 10604.916: 97.4696% ( 51) 00:10:54.843 10604.916 - 10664.495: 97.6839% ( 31) 00:10:54.843 10664.495 - 10724.073: 97.8913% ( 30) 00:10:54.843 10724.073 - 10783.651: 98.0572% ( 24) 00:10:54.843 10783.651 - 10843.229: 98.1817% ( 18) 00:10:54.843 10843.229 - 10902.807: 98.2854% ( 15) 00:10:54.843 10902.807 - 
10962.385: 98.3407% ( 8) 00:10:54.843 10962.385 - 11021.964: 98.3753% ( 5) 00:10:54.843 11021.964 - 11081.542: 98.4098% ( 5) 00:10:54.843 11081.542 - 11141.120: 98.4582% ( 7) 00:10:54.843 11141.120 - 11200.698: 98.4997% ( 6) 00:10:54.843 11200.698 - 11260.276: 98.5412% ( 6) 00:10:54.843 11260.276 - 11319.855: 98.5896% ( 7) 00:10:54.843 11319.855 - 11379.433: 98.6311% ( 6) 00:10:54.843 11379.433 - 11439.011: 98.6795% ( 7) 00:10:54.843 11439.011 - 11498.589: 98.7279% ( 7) 00:10:54.843 11498.589 - 11558.167: 98.7486% ( 3) 00:10:54.843 11558.167 - 11617.745: 98.7694% ( 3) 00:10:54.843 11617.745 - 11677.324: 98.7901% ( 3) 00:10:54.843 11677.324 - 11736.902: 98.8039% ( 2) 00:10:54.843 11736.902 - 11796.480: 98.8178% ( 2) 00:10:54.843 11796.480 - 11856.058: 98.8316% ( 2) 00:10:54.843 11856.058 - 11915.636: 98.8523% ( 3) 00:10:54.843 11915.636 - 11975.215: 98.8731% ( 3) 00:10:54.843 11975.215 - 12034.793: 98.8938% ( 3) 00:10:54.843 12034.793 - 12094.371: 98.9076% ( 2) 00:10:54.843 12094.371 - 12153.949: 98.9284% ( 3) 00:10:54.843 12153.949 - 12213.527: 98.9491% ( 3) 00:10:54.843 12213.527 - 12273.105: 98.9629% ( 2) 00:10:54.843 12273.105 - 12332.684: 98.9837% ( 3) 00:10:54.843 12332.684 - 12392.262: 99.0044% ( 3) 00:10:54.843 12392.262 - 12451.840: 99.0252% ( 3) 00:10:54.843 12451.840 - 12511.418: 99.0390% ( 2) 00:10:54.843 12511.418 - 12570.996: 99.0597% ( 3) 00:10:54.843 12570.996 - 12630.575: 99.0805% ( 3) 00:10:54.843 12630.575 - 12690.153: 99.1012% ( 3) 00:10:54.843 12690.153 - 12749.731: 99.1150% ( 2) 00:10:54.843 30504.029 - 30742.342: 99.1220% ( 1) 00:10:54.843 30742.342 - 30980.655: 99.1634% ( 6) 00:10:54.843 30980.655 - 31218.967: 99.2118% ( 7) 00:10:54.843 31218.967 - 31457.280: 99.2533% ( 6) 00:10:54.843 31457.280 - 31695.593: 99.3017% ( 7) 00:10:54.843 31695.593 - 31933.905: 99.3432% ( 6) 00:10:54.843 31933.905 - 32172.218: 99.3916% ( 7) 00:10:54.843 32172.218 - 32410.531: 99.4262% ( 5) 00:10:54.843 32410.531 - 32648.844: 99.4746% ( 7) 00:10:54.843 32648.844 - 32887.156: 99.5230% ( 7) 00:10:54.843 32887.156 - 33125.469: 99.5713% ( 7) 00:10:54.843 33125.469 - 33363.782: 99.6128% ( 6) 00:10:54.843 33363.782 - 33602.095: 99.6543% ( 6) 00:10:54.843 33602.095 - 33840.407: 99.7027% ( 7) 00:10:54.843 33840.407 - 34078.720: 99.7442% ( 6) 00:10:54.843 34078.720 - 34317.033: 99.7926% ( 7) 00:10:54.843 34317.033 - 34555.345: 99.8410% ( 7) 00:10:54.843 34555.345 - 34793.658: 99.8894% ( 7) 00:10:54.843 34793.658 - 35031.971: 99.9378% ( 7) 00:10:54.843 35031.971 - 35270.284: 99.9862% ( 7) 00:10:54.843 35270.284 - 35508.596: 100.0000% ( 2) 00:10:54.843 00:10:54.843 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:54.843 ============================================================================== 00:10:54.843 Range in us Cumulative IO count 00:10:54.843 6970.647 - 7000.436: 0.0138% ( 2) 00:10:54.843 7000.436 - 7030.225: 0.0415% ( 4) 00:10:54.843 7030.225 - 7060.015: 0.0899% ( 7) 00:10:54.843 7060.015 - 7089.804: 0.1314% ( 6) 00:10:54.843 7089.804 - 7119.593: 0.2005% ( 10) 00:10:54.843 7119.593 - 7149.382: 0.3111% ( 16) 00:10:54.843 7149.382 - 7179.171: 0.4356% ( 18) 00:10:54.843 7179.171 - 7208.960: 0.6222% ( 27) 00:10:54.843 7208.960 - 7238.749: 0.8711% ( 36) 00:10:54.843 7238.749 - 7268.538: 1.1546% ( 41) 00:10:54.843 7268.538 - 7298.327: 1.4864% ( 48) 00:10:54.843 7298.327 - 7328.116: 1.9151% ( 62) 00:10:54.843 7328.116 - 7357.905: 2.4544% ( 78) 00:10:54.843 7357.905 - 7387.695: 3.1043% ( 94) 00:10:54.843 7387.695 - 7417.484: 3.7680% ( 96) 00:10:54.843 7417.484 - 7447.273: 
4.4663% ( 101) 00:10:54.843 7447.273 - 7477.062: 5.2406% ( 112) 00:10:54.843 7477.062 - 7506.851: 6.0772% ( 121) 00:10:54.843 7506.851 - 7536.640: 6.9552% ( 127) 00:10:54.843 7536.640 - 7566.429: 7.8678% ( 132) 00:10:54.843 7566.429 - 7596.218: 8.8081% ( 136) 00:10:54.843 7596.218 - 7626.007: 9.7898% ( 142) 00:10:54.843 7626.007 - 7685.585: 11.9746% ( 316) 00:10:54.843 7685.585 - 7745.164: 14.2976% ( 336) 00:10:54.843 7745.164 - 7804.742: 16.6966% ( 347) 00:10:54.843 7804.742 - 7864.320: 19.1303% ( 352) 00:10:54.843 7864.320 - 7923.898: 21.5985% ( 357) 00:10:54.843 7923.898 - 7983.476: 24.0805% ( 359) 00:10:54.843 7983.476 - 8043.055: 26.5763% ( 361) 00:10:54.843 8043.055 - 8102.633: 29.1759% ( 376) 00:10:54.843 8102.633 - 8162.211: 31.7962% ( 379) 00:10:54.843 8162.211 - 8221.789: 34.3266% ( 366) 00:10:54.843 8221.789 - 8281.367: 37.0091% ( 388) 00:10:54.843 8281.367 - 8340.945: 39.7400% ( 395) 00:10:54.843 8340.945 - 8400.524: 42.4018% ( 385) 00:10:54.843 8400.524 - 8460.102: 45.2641% ( 414) 00:10:54.843 8460.102 - 8519.680: 48.1886% ( 423) 00:10:54.843 8519.680 - 8579.258: 51.0993% ( 421) 00:10:54.843 8579.258 - 8638.836: 54.1206% ( 437) 00:10:54.843 8638.836 - 8698.415: 57.1073% ( 432) 00:10:54.843 8698.415 - 8757.993: 60.0249% ( 422) 00:10:54.843 8757.993 - 8817.571: 62.9287% ( 420) 00:10:54.843 8817.571 - 8877.149: 65.8670% ( 425) 00:10:54.843 8877.149 - 8936.727: 68.7293% ( 414) 00:10:54.843 8936.727 - 8996.305: 71.5639% ( 410) 00:10:54.843 8996.305 - 9055.884: 74.2948% ( 395) 00:10:54.843 9055.884 - 9115.462: 76.8460% ( 369) 00:10:54.843 9115.462 - 9175.040: 79.0100% ( 313) 00:10:54.843 9175.040 - 9234.618: 80.9112% ( 275) 00:10:54.843 9234.618 - 9294.196: 82.6742% ( 255) 00:10:54.843 9294.196 - 9353.775: 84.1952% ( 220) 00:10:54.843 9353.775 - 9413.353: 85.3637% ( 169) 00:10:54.843 9413.353 - 9472.931: 86.5252% ( 168) 00:10:54.843 9472.931 - 9532.509: 87.4516% ( 134) 00:10:54.843 9532.509 - 9592.087: 88.2329% ( 113) 00:10:54.843 9592.087 - 9651.665: 88.9104% ( 98) 00:10:54.843 9651.665 - 9711.244: 89.6225% ( 103) 00:10:54.843 9711.244 - 9770.822: 90.2240% ( 87) 00:10:54.843 9770.822 - 9830.400: 90.8601% ( 92) 00:10:54.843 9830.400 - 9889.978: 91.5030% ( 93) 00:10:54.843 9889.978 - 9949.556: 92.0631% ( 81) 00:10:54.843 9949.556 - 10009.135: 92.6853% ( 90) 00:10:54.843 10009.135 - 10068.713: 93.2730% ( 85) 00:10:54.844 10068.713 - 10128.291: 93.8883% ( 89) 00:10:54.844 10128.291 - 10187.869: 94.5036% ( 89) 00:10:54.844 10187.869 - 10247.447: 95.0083% ( 73) 00:10:54.844 10247.447 - 10307.025: 95.5545% ( 79) 00:10:54.844 10307.025 - 10366.604: 96.0730% ( 75) 00:10:54.844 10366.604 - 10426.182: 96.5501% ( 69) 00:10:54.844 10426.182 - 10485.760: 96.9303% ( 55) 00:10:54.844 10485.760 - 10545.338: 97.3175% ( 56) 00:10:54.844 10545.338 - 10604.916: 97.6562% ( 49) 00:10:54.844 10604.916 - 10664.495: 97.9328% ( 40) 00:10:54.844 10664.495 - 10724.073: 98.1264% ( 28) 00:10:54.844 10724.073 - 10783.651: 98.2647% ( 20) 00:10:54.844 10783.651 - 10843.229: 98.3891% ( 18) 00:10:54.844 10843.229 - 10902.807: 98.4790% ( 13) 00:10:54.844 10902.807 - 10962.385: 98.5412% ( 9) 00:10:54.844 10962.385 - 11021.964: 98.5896% ( 7) 00:10:54.844 11021.964 - 11081.542: 98.6242% ( 5) 00:10:54.844 11081.542 - 11141.120: 98.6657% ( 6) 00:10:54.844 11141.120 - 11200.698: 98.7071% ( 6) 00:10:54.844 11200.698 - 11260.276: 98.7555% ( 7) 00:10:54.844 11260.276 - 11319.855: 98.7901% ( 5) 00:10:54.844 11319.855 - 11379.433: 98.8316% ( 6) 00:10:54.844 11379.433 - 11439.011: 98.8731% ( 6) 00:10:54.844 11439.011 - 11498.589: 
98.9145% ( 6) 00:10:54.844 11498.589 - 11558.167: 98.9491% ( 5) 00:10:54.844 11558.167 - 11617.745: 98.9837% ( 5) 00:10:54.844 11617.745 - 11677.324: 99.0113% ( 4) 00:10:54.844 11677.324 - 11736.902: 99.0321% ( 3) 00:10:54.844 11736.902 - 11796.480: 99.0459% ( 2) 00:10:54.844 11796.480 - 11856.058: 99.0666% ( 3) 00:10:54.844 11856.058 - 11915.636: 99.0874% ( 3) 00:10:54.844 11915.636 - 11975.215: 99.1012% ( 2) 00:10:54.844 11975.215 - 12034.793: 99.1150% ( 2) 00:10:54.844 29789.091 - 29908.247: 99.1358% ( 3) 00:10:54.844 29908.247 - 30027.404: 99.1565% ( 3) 00:10:54.844 30027.404 - 30146.560: 99.1704% ( 2) 00:10:54.844 30146.560 - 30265.716: 99.1911% ( 3) 00:10:54.844 30265.716 - 30384.873: 99.2049% ( 2) 00:10:54.844 30384.873 - 30504.029: 99.2257% ( 3) 00:10:54.844 30504.029 - 30742.342: 99.2671% ( 6) 00:10:54.844 30742.342 - 30980.655: 99.3155% ( 7) 00:10:54.844 30980.655 - 31218.967: 99.3570% ( 6) 00:10:54.844 31218.967 - 31457.280: 99.3985% ( 6) 00:10:54.844 31457.280 - 31695.593: 99.4400% ( 6) 00:10:54.844 31695.593 - 31933.905: 99.4815% ( 6) 00:10:54.844 31933.905 - 32172.218: 99.5230% ( 6) 00:10:54.844 32172.218 - 32410.531: 99.5644% ( 6) 00:10:54.844 32410.531 - 32648.844: 99.6059% ( 6) 00:10:54.844 32648.844 - 32887.156: 99.6405% ( 5) 00:10:54.844 32887.156 - 33125.469: 99.6889% ( 7) 00:10:54.844 33125.469 - 33363.782: 99.7304% ( 6) 00:10:54.844 33363.782 - 33602.095: 99.7718% ( 6) 00:10:54.844 33602.095 - 33840.407: 99.8133% ( 6) 00:10:54.844 33840.407 - 34078.720: 99.8617% ( 7) 00:10:54.844 34078.720 - 34317.033: 99.9032% ( 6) 00:10:54.844 34317.033 - 34555.345: 99.9447% ( 6) 00:10:54.844 34555.345 - 34793.658: 99.9862% ( 6) 00:10:54.844 34793.658 - 35031.971: 100.0000% ( 2) 00:10:54.844 00:10:54.844 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:54.844 ============================================================================== 00:10:54.844 Range in us Cumulative IO count 00:10:54.844 6911.069 - 6940.858: 0.0277% ( 4) 00:10:54.844 6940.858 - 6970.647: 0.0415% ( 2) 00:10:54.844 6970.647 - 7000.436: 0.0484% ( 1) 00:10:54.844 7000.436 - 7030.225: 0.0691% ( 3) 00:10:54.844 7030.225 - 7060.015: 0.0830% ( 2) 00:10:54.844 7060.015 - 7089.804: 0.0968% ( 2) 00:10:54.844 7089.804 - 7119.593: 0.1452% ( 7) 00:10:54.844 7119.593 - 7149.382: 0.2005% ( 8) 00:10:54.844 7149.382 - 7179.171: 0.3180% ( 17) 00:10:54.844 7179.171 - 7208.960: 0.4909% ( 25) 00:10:54.844 7208.960 - 7238.749: 0.6914% ( 29) 00:10:54.844 7238.749 - 7268.538: 0.8988% ( 30) 00:10:54.844 7268.538 - 7298.327: 1.1753% ( 40) 00:10:54.844 7298.327 - 7328.116: 1.5971% ( 61) 00:10:54.844 7328.116 - 7357.905: 2.0949% ( 72) 00:10:54.844 7357.905 - 7387.695: 2.7102% ( 89) 00:10:54.844 7387.695 - 7417.484: 3.4430% ( 106) 00:10:54.844 7417.484 - 7447.273: 4.1621% ( 104) 00:10:54.844 7447.273 - 7477.062: 4.9779% ( 118) 00:10:54.844 7477.062 - 7506.851: 5.8628% ( 128) 00:10:54.844 7506.851 - 7536.640: 6.7478% ( 128) 00:10:54.844 7536.640 - 7566.429: 7.7434% ( 144) 00:10:54.844 7566.429 - 7596.218: 8.7597% ( 147) 00:10:54.844 7596.218 - 7626.007: 9.8451% ( 157) 00:10:54.844 7626.007 - 7685.585: 12.0783% ( 323) 00:10:54.844 7685.585 - 7745.164: 14.4842% ( 348) 00:10:54.844 7745.164 - 7804.742: 16.8833% ( 347) 00:10:54.844 7804.742 - 7864.320: 19.2409% ( 341) 00:10:54.844 7864.320 - 7923.898: 21.6607% ( 350) 00:10:54.844 7923.898 - 7983.476: 24.0943% ( 352) 00:10:54.844 7983.476 - 8043.055: 26.6247% ( 366) 00:10:54.844 8043.055 - 8102.633: 29.0653% ( 353) 00:10:54.844 8102.633 - 8162.211: 31.5888% ( 365) 00:10:54.844 
8162.211 - 8221.789: 34.2160% ( 380) 00:10:54.844 8221.789 - 8281.367: 36.8570% ( 382) 00:10:54.844 8281.367 - 8340.945: 39.6018% ( 397) 00:10:54.844 8340.945 - 8400.524: 42.4087% ( 406) 00:10:54.844 8400.524 - 8460.102: 45.2226% ( 407) 00:10:54.844 8460.102 - 8519.680: 48.0918% ( 415) 00:10:54.844 8519.680 - 8579.258: 51.0647% ( 430) 00:10:54.844 8579.258 - 8638.836: 54.1067% ( 440) 00:10:54.844 8638.836 - 8698.415: 57.1073% ( 434) 00:10:54.844 8698.415 - 8757.993: 60.1286% ( 437) 00:10:54.844 8757.993 - 8817.571: 63.1291% ( 434) 00:10:54.844 8817.571 - 8877.149: 66.1366% ( 435) 00:10:54.844 8877.149 - 8936.727: 69.0058% ( 415) 00:10:54.844 8936.727 - 8996.305: 71.8335% ( 409) 00:10:54.844 8996.305 - 9055.884: 74.4953% ( 385) 00:10:54.844 9055.884 - 9115.462: 77.0257% ( 366) 00:10:54.844 9115.462 - 9175.040: 79.1897% ( 313) 00:10:54.844 9175.040 - 9234.618: 81.0910% ( 275) 00:10:54.844 9234.618 - 9294.196: 82.7710% ( 243) 00:10:54.844 9294.196 - 9353.775: 84.1952% ( 206) 00:10:54.844 9353.775 - 9413.353: 85.3498% ( 167) 00:10:54.844 9413.353 - 9472.931: 86.3800% ( 149) 00:10:54.844 9472.931 - 9532.509: 87.2027% ( 119) 00:10:54.844 9532.509 - 9592.087: 87.9494% ( 108) 00:10:54.844 9592.087 - 9651.665: 88.7168% ( 111) 00:10:54.844 9651.665 - 9711.244: 89.3874% ( 97) 00:10:54.844 9711.244 - 9770.822: 90.0373% ( 94) 00:10:54.844 9770.822 - 9830.400: 90.6665% ( 91) 00:10:54.844 9830.400 - 9889.978: 91.2611% ( 86) 00:10:54.844 9889.978 - 9949.556: 91.8418% ( 84) 00:10:54.844 9949.556 - 10009.135: 92.4640% ( 90) 00:10:54.844 10009.135 - 10068.713: 93.0932% ( 91) 00:10:54.844 10068.713 - 10128.291: 93.6947% ( 87) 00:10:54.844 10128.291 - 10187.869: 94.2824% ( 85) 00:10:54.844 10187.869 - 10247.447: 94.8838% ( 87) 00:10:54.844 10247.447 - 10307.025: 95.4715% ( 85) 00:10:54.844 10307.025 - 10366.604: 96.0592% ( 85) 00:10:54.844 10366.604 - 10426.182: 96.5431% ( 70) 00:10:54.844 10426.182 - 10485.760: 96.9856% ( 64) 00:10:54.844 10485.760 - 10545.338: 97.3590% ( 54) 00:10:54.844 10545.338 - 10604.916: 97.6977% ( 49) 00:10:54.844 10604.916 - 10664.495: 97.9743% ( 40) 00:10:54.844 10664.495 - 10724.073: 98.1886% ( 31) 00:10:54.844 10724.073 - 10783.651: 98.3476% ( 23) 00:10:54.844 10783.651 - 10843.229: 98.4928% ( 21) 00:10:54.844 10843.229 - 10902.807: 98.5896% ( 14) 00:10:54.844 10902.807 - 10962.385: 98.6518% ( 9) 00:10:54.844 10962.385 - 11021.964: 98.6933% ( 6) 00:10:54.844 11021.964 - 11081.542: 98.7417% ( 7) 00:10:54.844 11081.542 - 11141.120: 98.7763% ( 5) 00:10:54.844 11141.120 - 11200.698: 98.8247% ( 7) 00:10:54.844 11200.698 - 11260.276: 98.8731% ( 7) 00:10:54.844 11260.276 - 11319.855: 98.9076% ( 5) 00:10:54.844 11319.855 - 11379.433: 98.9491% ( 6) 00:10:54.844 11379.433 - 11439.011: 98.9906% ( 6) 00:10:54.844 11439.011 - 11498.589: 99.0390% ( 7) 00:10:54.844 11498.589 - 11558.167: 99.0666% ( 4) 00:10:54.844 11558.167 - 11617.745: 99.0874% ( 3) 00:10:54.844 11617.745 - 11677.324: 99.1081% ( 3) 00:10:54.844 11677.324 - 11736.902: 99.1150% ( 1) 00:10:54.844 28001.745 - 28120.902: 99.1358% ( 3) 00:10:54.844 28120.902 - 28240.058: 99.1565% ( 3) 00:10:54.844 28240.058 - 28359.215: 99.1773% ( 3) 00:10:54.844 28359.215 - 28478.371: 99.1980% ( 3) 00:10:54.844 28478.371 - 28597.527: 99.2188% ( 3) 00:10:54.844 28597.527 - 28716.684: 99.2395% ( 3) 00:10:54.844 28716.684 - 28835.840: 99.2602% ( 3) 00:10:54.844 28835.840 - 28954.996: 99.2810% ( 3) 00:10:54.844 28954.996 - 29074.153: 99.3017% ( 3) 00:10:54.844 29074.153 - 29193.309: 99.3225% ( 3) 00:10:54.844 29193.309 - 29312.465: 99.3432% ( 3) 
00:10:54.844 29312.465 - 29431.622: 99.3639% ( 3) 00:10:54.844 29431.622 - 29550.778: 99.3847% ( 3) 00:10:54.844 29550.778 - 29669.935: 99.4054% ( 3) 00:10:54.844 29669.935 - 29789.091: 99.4262% ( 3) 00:10:54.844 29789.091 - 29908.247: 99.4469% ( 3) 00:10:54.844 29908.247 - 30027.404: 99.4676% ( 3) 00:10:54.844 30027.404 - 30146.560: 99.4884% ( 3) 00:10:54.844 30146.560 - 30265.716: 99.5091% ( 3) 00:10:54.844 30265.716 - 30384.873: 99.5299% ( 3) 00:10:54.844 30384.873 - 30504.029: 99.5506% ( 3) 00:10:54.844 30504.029 - 30742.342: 99.5921% ( 6) 00:10:54.845 30742.342 - 30980.655: 99.6336% ( 6) 00:10:54.845 30980.655 - 31218.967: 99.6820% ( 7) 00:10:54.845 31218.967 - 31457.280: 99.7235% ( 6) 00:10:54.845 31457.280 - 31695.593: 99.7649% ( 6) 00:10:54.845 31695.593 - 31933.905: 99.8064% ( 6) 00:10:54.845 31933.905 - 32172.218: 99.8479% ( 6) 00:10:54.845 32172.218 - 32410.531: 99.8894% ( 6) 00:10:54.845 32410.531 - 32648.844: 99.9309% ( 6) 00:10:54.845 32648.844 - 32887.156: 99.9723% ( 6) 00:10:54.845 32887.156 - 33125.469: 100.0000% ( 4) 00:10:54.845 00:10:54.845 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:54.845 ============================================================================== 00:10:54.845 Range in us Cumulative IO count 00:10:54.845 7030.225 - 7060.015: 0.0346% ( 5) 00:10:54.845 7060.015 - 7089.804: 0.0691% ( 5) 00:10:54.845 7089.804 - 7119.593: 0.1383% ( 10) 00:10:54.845 7119.593 - 7149.382: 0.2143% ( 11) 00:10:54.845 7149.382 - 7179.171: 0.3388% ( 18) 00:10:54.845 7179.171 - 7208.960: 0.5324% ( 28) 00:10:54.845 7208.960 - 7238.749: 0.7605% ( 33) 00:10:54.845 7238.749 - 7268.538: 0.9817% ( 32) 00:10:54.845 7268.538 - 7298.327: 1.2860% ( 44) 00:10:54.845 7298.327 - 7328.116: 1.6731% ( 56) 00:10:54.845 7328.116 - 7357.905: 2.1778% ( 73) 00:10:54.845 7357.905 - 7387.695: 2.7240% ( 79) 00:10:54.845 7387.695 - 7417.484: 3.4569% ( 106) 00:10:54.845 7417.484 - 7447.273: 4.1551% ( 101) 00:10:54.845 7447.273 - 7477.062: 4.9364% ( 113) 00:10:54.845 7477.062 - 7506.851: 5.7799% ( 122) 00:10:54.845 7506.851 - 7536.640: 6.7201% ( 136) 00:10:54.845 7536.640 - 7566.429: 7.7364% ( 147) 00:10:54.845 7566.429 - 7596.218: 8.6698% ( 135) 00:10:54.845 7596.218 - 7626.007: 9.6999% ( 149) 00:10:54.845 7626.007 - 7685.585: 11.9884% ( 331) 00:10:54.845 7685.585 - 7745.164: 14.2699% ( 330) 00:10:54.845 7745.164 - 7804.742: 16.7105% ( 353) 00:10:54.845 7804.742 - 7864.320: 19.1717% ( 356) 00:10:54.845 7864.320 - 7923.898: 21.6330% ( 356) 00:10:54.845 7923.898 - 7983.476: 24.0943% ( 356) 00:10:54.845 7983.476 - 8043.055: 26.6109% ( 364) 00:10:54.845 8043.055 - 8102.633: 29.1206% ( 363) 00:10:54.845 8102.633 - 8162.211: 31.6441% ( 365) 00:10:54.845 8162.211 - 8221.789: 34.2575% ( 378) 00:10:54.845 8221.789 - 8281.367: 36.9123% ( 384) 00:10:54.845 8281.367 - 8340.945: 39.5810% ( 386) 00:10:54.845 8340.945 - 8400.524: 42.3465% ( 400) 00:10:54.845 8400.524 - 8460.102: 45.1466% ( 405) 00:10:54.845 8460.102 - 8519.680: 47.9812% ( 410) 00:10:54.845 8519.680 - 8579.258: 50.9264% ( 426) 00:10:54.845 8579.258 - 8638.836: 53.9062% ( 431) 00:10:54.845 8638.836 - 8698.415: 56.9690% ( 443) 00:10:54.845 8698.415 - 8757.993: 59.9903% ( 437) 00:10:54.845 8757.993 - 8817.571: 63.0324% ( 440) 00:10:54.845 8817.571 - 8877.149: 66.1020% ( 444) 00:10:54.845 8877.149 - 8936.727: 69.0542% ( 427) 00:10:54.845 8936.727 - 8996.305: 71.8404% ( 403) 00:10:54.845 8996.305 - 9055.884: 74.5230% ( 388) 00:10:54.845 9055.884 - 9115.462: 76.9773% ( 355) 00:10:54.845 9115.462 - 9175.040: 79.2381% ( 327) 00:10:54.845 
9175.040 - 9234.618: 81.1117% ( 271) 00:10:54.845 9234.618 - 9294.196: 82.8056% ( 245) 00:10:54.845 9294.196 - 9353.775: 84.2298% ( 206) 00:10:54.845 9353.775 - 9413.353: 85.4535% ( 177) 00:10:54.845 9413.353 - 9472.931: 86.4837% ( 149) 00:10:54.845 9472.931 - 9532.509: 87.3410% ( 124) 00:10:54.845 9532.509 - 9592.087: 88.0531% ( 103) 00:10:54.845 9592.087 - 9651.665: 88.7514% ( 101) 00:10:54.845 9651.665 - 9711.244: 89.4013% ( 94) 00:10:54.845 9711.244 - 9770.822: 90.0373% ( 92) 00:10:54.845 9770.822 - 9830.400: 90.6250% ( 85) 00:10:54.845 9830.400 - 9889.978: 91.2541% ( 91) 00:10:54.845 9889.978 - 9949.556: 91.8695% ( 89) 00:10:54.845 9949.556 - 10009.135: 92.5124% ( 93) 00:10:54.845 10009.135 - 10068.713: 93.1347% ( 90) 00:10:54.845 10068.713 - 10128.291: 93.7500% ( 89) 00:10:54.845 10128.291 - 10187.869: 94.3446% ( 86) 00:10:54.845 10187.869 - 10247.447: 94.9253% ( 84) 00:10:54.845 10247.447 - 10307.025: 95.5268% ( 87) 00:10:54.845 10307.025 - 10366.604: 96.0523% ( 76) 00:10:54.845 10366.604 - 10426.182: 96.5501% ( 72) 00:10:54.845 10426.182 - 10485.760: 96.9856% ( 63) 00:10:54.845 10485.760 - 10545.338: 97.3659% ( 55) 00:10:54.845 10545.338 - 10604.916: 97.7116% ( 50) 00:10:54.845 10604.916 - 10664.495: 97.9881% ( 40) 00:10:54.845 10664.495 - 10724.073: 98.2024% ( 31) 00:10:54.845 10724.073 - 10783.651: 98.3684% ( 24) 00:10:54.845 10783.651 - 10843.229: 98.5343% ( 24) 00:10:54.845 10843.229 - 10902.807: 98.6311% ( 14) 00:10:54.845 10902.807 - 10962.385: 98.6933% ( 9) 00:10:54.845 10962.385 - 11021.964: 98.7417% ( 7) 00:10:54.845 11021.964 - 11081.542: 98.7832% ( 6) 00:10:54.845 11081.542 - 11141.120: 98.8247% ( 6) 00:10:54.845 11141.120 - 11200.698: 98.8592% ( 5) 00:10:54.845 11200.698 - 11260.276: 98.9007% ( 6) 00:10:54.845 11260.276 - 11319.855: 98.9491% ( 7) 00:10:54.845 11319.855 - 11379.433: 98.9906% ( 6) 00:10:54.845 11379.433 - 11439.011: 99.0390% ( 7) 00:10:54.845 11439.011 - 11498.589: 99.0874% ( 7) 00:10:54.845 11498.589 - 11558.167: 99.1081% ( 3) 00:10:54.845 11558.167 - 11617.745: 99.1150% ( 1) 00:10:54.845 26095.244 - 26214.400: 99.1220% ( 1) 00:10:54.845 26214.400 - 26333.556: 99.1358% ( 2) 00:10:54.845 26333.556 - 26452.713: 99.1634% ( 4) 00:10:54.845 26452.713 - 26571.869: 99.1842% ( 3) 00:10:54.845 26571.869 - 26691.025: 99.2049% ( 3) 00:10:54.845 26691.025 - 26810.182: 99.2257% ( 3) 00:10:54.845 26810.182 - 26929.338: 99.2464% ( 3) 00:10:54.845 26929.338 - 27048.495: 99.2671% ( 3) 00:10:54.845 27048.495 - 27167.651: 99.2879% ( 3) 00:10:54.845 27167.651 - 27286.807: 99.3086% ( 3) 00:10:54.845 27286.807 - 27405.964: 99.3294% ( 3) 00:10:54.845 27405.964 - 27525.120: 99.3501% ( 3) 00:10:54.845 27525.120 - 27644.276: 99.3709% ( 3) 00:10:54.845 27644.276 - 27763.433: 99.3916% ( 3) 00:10:54.845 27763.433 - 27882.589: 99.4123% ( 3) 00:10:54.845 27882.589 - 28001.745: 99.4331% ( 3) 00:10:54.845 28001.745 - 28120.902: 99.4469% ( 2) 00:10:54.845 28120.902 - 28240.058: 99.4676% ( 3) 00:10:54.845 28240.058 - 28359.215: 99.4884% ( 3) 00:10:54.845 28359.215 - 28478.371: 99.5091% ( 3) 00:10:54.845 28478.371 - 28597.527: 99.5299% ( 3) 00:10:54.845 28597.527 - 28716.684: 99.5506% ( 3) 00:10:54.845 28716.684 - 28835.840: 99.5713% ( 3) 00:10:54.845 28835.840 - 28954.996: 99.5921% ( 3) 00:10:54.845 28954.996 - 29074.153: 99.6128% ( 3) 00:10:54.845 29074.153 - 29193.309: 99.6336% ( 3) 00:10:54.845 29193.309 - 29312.465: 99.6543% ( 3) 00:10:54.845 29312.465 - 29431.622: 99.6820% ( 4) 00:10:54.845 29431.622 - 29550.778: 99.6958% ( 2) 00:10:54.845 29550.778 - 29669.935: 99.7165% ( 3) 
00:10:54.845 29669.935 - 29789.091: 99.7373% ( 3) 00:10:54.845 29789.091 - 29908.247: 99.7580% ( 3) 00:10:54.845 29908.247 - 30027.404: 99.7857% ( 4) 00:10:54.845 30027.404 - 30146.560: 99.8064% ( 3) 00:10:54.845 30146.560 - 30265.716: 99.8202% ( 2) 00:10:54.845 30265.716 - 30384.873: 99.8479% ( 4) 00:10:54.845 30384.873 - 30504.029: 99.8686% ( 3) 00:10:54.845 30504.029 - 30742.342: 99.9101% ( 6) 00:10:54.845 30742.342 - 30980.655: 99.9516% ( 6) 00:10:54.845 30980.655 - 31218.967: 99.9931% ( 6) 00:10:54.845 31218.967 - 31457.280: 100.0000% ( 1) 00:10:54.845 00:10:54.845 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:54.845 ============================================================================== 00:10:54.845 Range in us Cumulative IO count 00:10:54.845 6970.647 - 7000.436: 0.0207% ( 3) 00:10:54.845 7000.436 - 7030.225: 0.0277% ( 1) 00:10:54.845 7030.225 - 7060.015: 0.0415% ( 2) 00:10:54.845 7060.015 - 7089.804: 0.0899% ( 7) 00:10:54.845 7089.804 - 7119.593: 0.1521% ( 9) 00:10:54.845 7119.593 - 7149.382: 0.2282% ( 11) 00:10:54.845 7149.382 - 7179.171: 0.3249% ( 14) 00:10:54.845 7179.171 - 7208.960: 0.4494% ( 18) 00:10:54.845 7208.960 - 7238.749: 0.6291% ( 26) 00:10:54.845 7238.749 - 7268.538: 0.8919% ( 38) 00:10:54.845 7268.538 - 7298.327: 1.2306% ( 49) 00:10:54.845 7298.327 - 7328.116: 1.6662% ( 63) 00:10:54.845 7328.116 - 7357.905: 2.1294% ( 67) 00:10:54.845 7357.905 - 7387.695: 2.7447% ( 89) 00:10:54.845 7387.695 - 7417.484: 3.3877% ( 93) 00:10:54.845 7417.484 - 7447.273: 4.2174% ( 120) 00:10:54.845 7447.273 - 7477.062: 5.0401% ( 119) 00:10:54.845 7477.062 - 7506.851: 5.9181% ( 127) 00:10:54.845 7506.851 - 7536.640: 6.7685% ( 123) 00:10:54.845 7536.640 - 7566.429: 7.7503% ( 142) 00:10:54.845 7566.429 - 7596.218: 8.7597% ( 146) 00:10:54.845 7596.218 - 7626.007: 9.8313% ( 155) 00:10:54.845 7626.007 - 7685.585: 12.0091% ( 315) 00:10:54.845 7685.585 - 7745.164: 14.2492% ( 324) 00:10:54.845 7745.164 - 7804.742: 16.6828% ( 352) 00:10:54.845 7804.742 - 7864.320: 19.1372% ( 355) 00:10:54.845 7864.320 - 7923.898: 21.6330% ( 361) 00:10:54.845 7923.898 - 7983.476: 24.1496% ( 364) 00:10:54.845 7983.476 - 8043.055: 26.6731% ( 365) 00:10:54.845 8043.055 - 8102.633: 29.1690% ( 361) 00:10:54.845 8102.633 - 8162.211: 31.6856% ( 364) 00:10:54.845 8162.211 - 8221.789: 34.3473% ( 385) 00:10:54.845 8221.789 - 8281.367: 36.9815% ( 381) 00:10:54.845 8281.367 - 8340.945: 39.6156% ( 381) 00:10:54.845 8340.945 - 8400.524: 42.4710% ( 413) 00:10:54.845 8400.524 - 8460.102: 45.3609% ( 418) 00:10:54.845 8460.102 - 8519.680: 48.2577% ( 419) 00:10:54.845 8519.680 - 8579.258: 51.2306% ( 430) 00:10:54.845 8579.258 - 8638.836: 54.2312% ( 434) 00:10:54.845 8638.836 - 8698.415: 57.2732% ( 440) 00:10:54.845 8698.415 - 8757.993: 60.2945% ( 437) 00:10:54.845 8757.993 - 8817.571: 63.3780% ( 446) 00:10:54.845 8817.571 - 8877.149: 66.3924% ( 436) 00:10:54.845 8877.149 - 8936.727: 69.3791% ( 432) 00:10:54.845 8936.727 - 8996.305: 72.2276% ( 412) 00:10:54.845 8996.305 - 9055.884: 74.9793% ( 398) 00:10:54.845 9055.884 - 9115.462: 77.4959% ( 364) 00:10:54.845 9115.462 - 9175.040: 79.6668% ( 314) 00:10:54.846 9175.040 - 9234.618: 81.6164% ( 282) 00:10:54.846 9234.618 - 9294.196: 83.1789% ( 226) 00:10:54.846 9294.196 - 9353.775: 84.6101% ( 207) 00:10:54.846 9353.775 - 9413.353: 85.7992% ( 172) 00:10:54.846 9413.353 - 9472.931: 86.8501% ( 152) 00:10:54.846 9472.931 - 9532.509: 87.6867% ( 121) 00:10:54.846 9532.509 - 9592.087: 88.3919% ( 102) 00:10:54.846 9592.087 - 9651.665: 88.9934% ( 87) 00:10:54.846 9651.665 
- 9711.244: 89.5810% ( 85) 00:10:54.846 9711.244 - 9770.822: 90.1618% ( 84) 00:10:54.846 9770.822 - 9830.400: 90.7218% ( 81) 00:10:54.846 9830.400 - 9889.978: 91.3095% ( 85) 00:10:54.846 9889.978 - 9949.556: 91.8833% ( 83) 00:10:54.846 9949.556 - 10009.135: 92.4226% ( 78) 00:10:54.846 10009.135 - 10068.713: 93.0310% ( 88) 00:10:54.846 10068.713 - 10128.291: 93.5633% ( 77) 00:10:54.846 10128.291 - 10187.869: 94.1579% ( 86) 00:10:54.846 10187.869 - 10247.447: 94.7248% ( 82) 00:10:54.846 10247.447 - 10307.025: 95.2710% ( 79) 00:10:54.846 10307.025 - 10366.604: 95.8172% ( 79) 00:10:54.846 10366.604 - 10426.182: 96.2597% ( 64) 00:10:54.846 10426.182 - 10485.760: 96.6676% ( 59) 00:10:54.846 10485.760 - 10545.338: 97.0548% ( 56) 00:10:54.846 10545.338 - 10604.916: 97.3520% ( 43) 00:10:54.846 10604.916 - 10664.495: 97.5733% ( 32) 00:10:54.846 10664.495 - 10724.073: 97.7530% ( 26) 00:10:54.846 10724.073 - 10783.651: 97.9328% ( 26) 00:10:54.846 10783.651 - 10843.229: 98.0780% ( 21) 00:10:54.846 10843.229 - 10902.807: 98.2024% ( 18) 00:10:54.846 10902.807 - 10962.385: 98.2923% ( 13) 00:10:54.846 10962.385 - 11021.964: 98.3960% ( 15) 00:10:54.846 11021.964 - 11081.542: 98.4928% ( 14) 00:10:54.846 11081.542 - 11141.120: 98.5896% ( 14) 00:10:54.846 11141.120 - 11200.698: 98.6795% ( 13) 00:10:54.846 11200.698 - 11260.276: 98.7694% ( 13) 00:10:54.846 11260.276 - 11319.855: 98.8454% ( 11) 00:10:54.846 11319.855 - 11379.433: 98.9007% ( 8) 00:10:54.846 11379.433 - 11439.011: 98.9491% ( 7) 00:10:54.846 11439.011 - 11498.589: 98.9768% ( 4) 00:10:54.846 11498.589 - 11558.167: 98.9975% ( 3) 00:10:54.846 11558.167 - 11617.745: 99.0252% ( 4) 00:10:54.846 11617.745 - 11677.324: 99.0459% ( 3) 00:10:54.846 11677.324 - 11736.902: 99.0736% ( 4) 00:10:54.846 11736.902 - 11796.480: 99.1012% ( 4) 00:10:54.846 11796.480 - 11856.058: 99.1150% ( 2) 00:10:54.846 24307.898 - 24427.055: 99.1220% ( 1) 00:10:54.846 24427.055 - 24546.211: 99.1427% ( 3) 00:10:54.846 24546.211 - 24665.367: 99.1634% ( 3) 00:10:54.846 24665.367 - 24784.524: 99.1842% ( 3) 00:10:54.846 24784.524 - 24903.680: 99.1980% ( 2) 00:10:54.846 24903.680 - 25022.836: 99.2257% ( 4) 00:10:54.846 25022.836 - 25141.993: 99.2464% ( 3) 00:10:54.846 25141.993 - 25261.149: 99.2671% ( 3) 00:10:54.846 25261.149 - 25380.305: 99.2879% ( 3) 00:10:54.846 25380.305 - 25499.462: 99.3017% ( 2) 00:10:54.846 25499.462 - 25618.618: 99.3294% ( 4) 00:10:54.846 25618.618 - 25737.775: 99.3501% ( 3) 00:10:54.846 25737.775 - 25856.931: 99.3709% ( 3) 00:10:54.846 25856.931 - 25976.087: 99.3916% ( 3) 00:10:54.846 25976.087 - 26095.244: 99.4123% ( 3) 00:10:54.846 26095.244 - 26214.400: 99.4331% ( 3) 00:10:54.846 26214.400 - 26333.556: 99.4538% ( 3) 00:10:54.846 26333.556 - 26452.713: 99.4815% ( 4) 00:10:54.846 26452.713 - 26571.869: 99.5022% ( 3) 00:10:54.846 26571.869 - 26691.025: 99.5230% ( 3) 00:10:54.846 26691.025 - 26810.182: 99.5437% ( 3) 00:10:54.846 26810.182 - 26929.338: 99.5644% ( 3) 00:10:54.846 26929.338 - 27048.495: 99.5852% ( 3) 00:10:54.846 27048.495 - 27167.651: 99.5990% ( 2) 00:10:54.846 27167.651 - 27286.807: 99.6267% ( 4) 00:10:54.846 27286.807 - 27405.964: 99.6405% ( 2) 00:10:54.846 27405.964 - 27525.120: 99.6612% ( 3) 00:10:54.846 27525.120 - 27644.276: 99.6820% ( 3) 00:10:54.846 27644.276 - 27763.433: 99.7027% ( 3) 00:10:54.846 27763.433 - 27882.589: 99.7235% ( 3) 00:10:54.846 27882.589 - 28001.745: 99.7442% ( 3) 00:10:54.846 28001.745 - 28120.902: 99.7649% ( 3) 00:10:54.846 28120.902 - 28240.058: 99.7926% ( 4) 00:10:54.846 28240.058 - 28359.215: 99.8064% ( 2) 
00:10:54.846 28359.215 - 28478.371: 99.8341% ( 4) 00:10:54.846 28478.371 - 28597.527: 99.8548% ( 3) 00:10:54.846 28597.527 - 28716.684: 99.8756% ( 3) 00:10:54.846 28716.684 - 28835.840: 99.8963% ( 3) 00:10:54.846 28835.840 - 28954.996: 99.9170% ( 3) 00:10:54.846 28954.996 - 29074.153: 99.9378% ( 3) 00:10:54.846 29074.153 - 29193.309: 99.9585% ( 3) 00:10:54.846 29193.309 - 29312.465: 99.9793% ( 3) 00:10:54.846 29312.465 - 29431.622: 100.0000% ( 3) 00:10:54.846 00:10:54.846 04:51:01 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:56.221 Initializing NVMe Controllers 00:10:56.221 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:56.221 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:56.221 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:56.221 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:56.221 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:56.221 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:56.221 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:56.221 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:56.221 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:56.221 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:56.221 Initialization complete. Launching workers. 00:10:56.221 ======================================================== 00:10:56.221 Latency(us) 00:10:56.221 Device Information : IOPS MiB/s Average min max 00:10:56.221 PCIE (0000:00:06.0) NSID 1 from core 0: 10591.91 124.12 12082.23 8758.95 36745.46 00:10:56.221 PCIE (0000:00:07.0) NSID 1 from core 0: 10591.91 124.12 12078.16 8979.01 35626.12 00:10:56.221 PCIE (0000:00:09.0) NSID 1 from core 0: 10591.91 124.12 12073.56 8990.45 35381.93 00:10:56.221 PCIE (0000:00:08.0) NSID 1 from core 0: 10591.91 124.12 12067.98 8944.75 33555.96 00:10:56.221 PCIE (0000:00:08.0) NSID 2 from core 0: 10591.91 124.12 12063.16 8979.04 32268.43 00:10:56.221 PCIE (0000:00:08.0) NSID 3 from core 0: 10591.91 124.12 12064.84 8797.77 30934.95 00:10:56.221 ======================================================== 00:10:56.221 Total : 63551.44 744.74 12071.66 8758.95 36745.46 00:10:56.221 00:10:56.221 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:56.221 ================================================================================= 00:10:56.221 1.00000% : 9234.618us 00:10:56.221 10.00000% : 10068.713us 00:10:56.221 25.00000% : 10783.651us 00:10:56.221 50.00000% : 11736.902us 00:10:56.221 75.00000% : 12809.309us 00:10:56.221 90.00000% : 14060.451us 00:10:56.221 95.00000% : 14715.811us 00:10:56.221 98.00000% : 15728.640us 00:10:56.221 99.00000% : 31695.593us 00:10:56.221 99.50000% : 34317.033us 00:10:56.221 99.90000% : 36223.535us 00:10:56.221 99.99000% : 36700.160us 00:10:56.221 99.99900% : 36938.473us 00:10:56.221 99.99990% : 36938.473us 00:10:56.221 99.99999% : 36938.473us 00:10:56.221 00:10:56.222 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:10:56.222 ================================================================================= 00:10:56.222 1.00000% : 9413.353us 00:10:56.222 10.00000% : 10187.869us 00:10:56.222 25.00000% : 10843.229us 00:10:56.222 50.00000% : 11677.324us 00:10:56.222 75.00000% : 12749.731us 00:10:56.222 90.00000% : 13941.295us 00:10:56.222 95.00000% : 14656.233us 00:10:56.222 98.00000% : 15609.484us 00:10:56.222 99.00000% : 30980.655us 00:10:56.222 99.50000% : 33602.095us 00:10:56.222 99.90000% : 
35270.284us 00:10:56.222 99.99000% : 35746.909us 00:10:56.222 99.99900% : 35746.909us 00:10:56.222 99.99990% : 35746.909us 00:10:56.222 99.99999% : 35746.909us 00:10:56.222 00:10:56.222 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:10:56.222 ================================================================================= 00:10:56.222 1.00000% : 9353.775us 00:10:56.222 10.00000% : 10187.869us 00:10:56.222 25.00000% : 10843.229us 00:10:56.222 50.00000% : 11736.902us 00:10:56.222 75.00000% : 12809.309us 00:10:56.222 90.00000% : 13941.295us 00:10:56.222 95.00000% : 14537.076us 00:10:56.222 98.00000% : 15371.171us 00:10:56.222 99.00000% : 31218.967us 00:10:56.222 99.50000% : 33602.095us 00:10:56.222 99.90000% : 35031.971us 00:10:56.222 99.99000% : 35508.596us 00:10:56.222 99.99900% : 35508.596us 00:10:56.222 99.99990% : 35508.596us 00:10:56.222 99.99999% : 35508.596us 00:10:56.222 00:10:56.222 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:10:56.222 ================================================================================= 00:10:56.222 1.00000% : 9353.775us 00:10:56.222 10.00000% : 10187.869us 00:10:56.222 25.00000% : 10843.229us 00:10:56.222 50.00000% : 11736.902us 00:10:56.222 75.00000% : 12809.309us 00:10:56.222 90.00000% : 13941.295us 00:10:56.222 95.00000% : 14596.655us 00:10:56.222 98.00000% : 15728.640us 00:10:56.222 99.00000% : 29669.935us 00:10:56.222 99.50000% : 31695.593us 00:10:56.222 99.90000% : 33363.782us 00:10:56.222 99.99000% : 33602.095us 00:10:56.222 99.99900% : 33602.095us 00:10:56.222 99.99990% : 33602.095us 00:10:56.222 99.99999% : 33602.095us 00:10:56.222 00:10:56.222 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:10:56.222 ================================================================================= 00:10:56.222 1.00000% : 9413.353us 00:10:56.222 10.00000% : 10187.869us 00:10:56.222 25.00000% : 10843.229us 00:10:56.222 50.00000% : 11677.324us 00:10:56.222 75.00000% : 12749.731us 00:10:56.222 90.00000% : 14060.451us 00:10:56.222 95.00000% : 14715.811us 00:10:56.222 98.00000% : 16086.109us 00:10:56.222 99.00000% : 28478.371us 00:10:56.222 99.50000% : 30384.873us 00:10:56.222 99.90000% : 31933.905us 00:10:56.222 99.99000% : 32410.531us 00:10:56.222 99.99900% : 32410.531us 00:10:56.222 99.99990% : 32410.531us 00:10:56.222 99.99999% : 32410.531us 00:10:56.222 00:10:56.222 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:10:56.222 ================================================================================= 00:10:56.222 1.00000% : 9353.775us 00:10:56.222 10.00000% : 10187.869us 00:10:56.222 25.00000% : 10783.651us 00:10:56.222 50.00000% : 11677.324us 00:10:56.222 75.00000% : 12749.731us 00:10:56.222 90.00000% : 14060.451us 00:10:56.222 95.00000% : 14775.389us 00:10:56.222 98.00000% : 17992.611us 00:10:56.222 99.00000% : 27167.651us 00:10:56.222 99.50000% : 28954.996us 00:10:56.222 99.90000% : 30742.342us 00:10:56.222 99.99000% : 30980.655us 00:10:56.222 99.99900% : 30980.655us 00:10:56.222 99.99990% : 30980.655us 00:10:56.222 99.99999% : 30980.655us 00:10:56.222 00:10:56.222 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:10:56.222 ============================================================================== 00:10:56.222 Range in us Cumulative IO count 00:10:56.222 8757.993 - 8817.571: 0.1318% ( 14) 00:10:56.222 8817.571 - 8877.149: 0.2071% ( 8) 00:10:56.222 8877.149 - 8936.727: 0.3012% ( 10) 00:10:56.222 8936.727 - 8996.305: 0.4612% ( 17) 00:10:56.222 8996.305 
- 9055.884: 0.6024% ( 15) 00:10:56.222 9055.884 - 9115.462: 0.7907% ( 20) 00:10:56.222 9115.462 - 9175.040: 0.9507% ( 17) 00:10:56.222 9175.040 - 9234.618: 1.2048% ( 27) 00:10:56.222 9234.618 - 9294.196: 1.6284% ( 45) 00:10:56.222 9294.196 - 9353.775: 2.0614% ( 46) 00:10:56.222 9353.775 - 9413.353: 2.3720% ( 33) 00:10:56.222 9413.353 - 9472.931: 2.7579% ( 41) 00:10:56.222 9472.931 - 9532.509: 3.2662% ( 54) 00:10:56.222 9532.509 - 9592.087: 3.8592% ( 63) 00:10:56.222 9592.087 - 9651.665: 4.4145% ( 59) 00:10:56.222 9651.665 - 9711.244: 5.2240% ( 86) 00:10:56.222 9711.244 - 9770.822: 5.9111% ( 73) 00:10:56.222 9770.822 - 9830.400: 6.6830% ( 82) 00:10:56.222 9830.400 - 9889.978: 7.6713% ( 105) 00:10:56.222 9889.978 - 9949.556: 8.6220% ( 101) 00:10:56.222 9949.556 - 10009.135: 9.6762% ( 112) 00:10:56.222 10009.135 - 10068.713: 10.8151% ( 121) 00:10:56.222 10068.713 - 10128.291: 11.9823% ( 124) 00:10:56.222 10128.291 - 10187.869: 13.1495% ( 124) 00:10:56.222 10187.869 - 10247.447: 14.3637% ( 129) 00:10:56.222 10247.447 - 10307.025: 15.5968% ( 131) 00:10:56.222 10307.025 - 10366.604: 16.8298% ( 131) 00:10:56.222 10366.604 - 10426.182: 18.0629% ( 131) 00:10:56.222 10426.182 - 10485.760: 19.3806% ( 140) 00:10:56.222 10485.760 - 10545.338: 20.6043% ( 130) 00:10:56.222 10545.338 - 10604.916: 21.8656% ( 134) 00:10:56.222 10604.916 - 10664.495: 23.2304% ( 145) 00:10:56.222 10664.495 - 10724.073: 24.6894% ( 155) 00:10:56.222 10724.073 - 10783.651: 26.1672% ( 157) 00:10:56.222 10783.651 - 10843.229: 27.6167% ( 154) 00:10:56.222 10843.229 - 10902.807: 29.2169% ( 170) 00:10:56.222 10902.807 - 10962.385: 30.6099% ( 148) 00:10:56.222 10962.385 - 11021.964: 32.1160% ( 160) 00:10:56.222 11021.964 - 11081.542: 33.6126% ( 159) 00:10:56.222 11081.542 - 11141.120: 35.1468% ( 163) 00:10:56.222 11141.120 - 11200.698: 36.8129% ( 177) 00:10:56.222 11200.698 - 11260.276: 38.2812% ( 156) 00:10:56.222 11260.276 - 11319.855: 39.8438% ( 166) 00:10:56.222 11319.855 - 11379.433: 41.3968% ( 165) 00:10:56.222 11379.433 - 11439.011: 43.0817% ( 179) 00:10:56.222 11439.011 - 11498.589: 44.6442% ( 166) 00:10:56.222 11498.589 - 11558.167: 46.2161% ( 167) 00:10:56.222 11558.167 - 11617.745: 47.7504% ( 163) 00:10:56.222 11617.745 - 11677.324: 49.4164% ( 177) 00:10:56.222 11677.324 - 11736.902: 50.9601% ( 164) 00:10:56.222 11736.902 - 11796.480: 52.4473% ( 158) 00:10:56.222 11796.480 - 11856.058: 54.0286% ( 168) 00:10:56.222 11856.058 - 11915.636: 55.5817% ( 165) 00:10:56.222 11915.636 - 11975.215: 57.2101% ( 173) 00:10:56.222 11975.215 - 12034.793: 58.7067% ( 159) 00:10:56.222 12034.793 - 12094.371: 60.1092% ( 149) 00:10:56.222 12094.371 - 12153.949: 61.7941% ( 179) 00:10:56.222 12153.949 - 12213.527: 63.2154% ( 151) 00:10:56.222 12213.527 - 12273.105: 64.6273% ( 150) 00:10:56.222 12273.105 - 12332.684: 66.0297% ( 149) 00:10:56.222 12332.684 - 12392.262: 67.3381% ( 139) 00:10:56.222 12392.262 - 12451.840: 68.6370% ( 138) 00:10:56.222 12451.840 - 12511.418: 69.8983% ( 134) 00:10:56.222 12511.418 - 12570.996: 71.1032% ( 128) 00:10:56.222 12570.996 - 12630.575: 72.2797% ( 125) 00:10:56.222 12630.575 - 12690.153: 73.3245% ( 111) 00:10:56.222 12690.153 - 12749.731: 74.2658% ( 100) 00:10:56.222 12749.731 - 12809.309: 75.4047% ( 121) 00:10:56.222 12809.309 - 12868.887: 76.3648% ( 102) 00:10:56.222 12868.887 - 12928.465: 77.2779% ( 97) 00:10:56.222 12928.465 - 12988.044: 78.1815% ( 96) 00:10:56.222 12988.044 - 13047.622: 79.0757% ( 95) 00:10:56.222 13047.622 - 13107.200: 79.8663% ( 84) 00:10:56.222 13107.200 - 13166.778: 80.7417% ( 93) 
00:10:56.222 13166.778 - 13226.356: 81.5512% ( 86) 00:10:56.222 13226.356 - 13285.935: 82.4078% ( 91) 00:10:56.222 13285.935 - 13345.513: 83.0196% ( 65) 00:10:56.222 13345.513 - 13405.091: 83.7820% ( 81) 00:10:56.222 13405.091 - 13464.669: 84.5727% ( 84) 00:10:56.222 13464.669 - 13524.247: 85.2033% ( 67) 00:10:56.222 13524.247 - 13583.825: 85.9281% ( 77) 00:10:56.222 13583.825 - 13643.404: 86.5305% ( 64) 00:10:56.222 13643.404 - 13702.982: 87.1423% ( 65) 00:10:56.222 13702.982 - 13762.560: 87.6224% ( 51) 00:10:56.222 13762.560 - 13822.138: 88.1589% ( 57) 00:10:56.222 13822.138 - 13881.716: 88.7425% ( 62) 00:10:56.222 13881.716 - 13941.295: 89.2602% ( 55) 00:10:56.222 13941.295 - 14000.873: 89.8061% ( 58) 00:10:56.222 14000.873 - 14060.451: 90.3426% ( 57) 00:10:56.222 14060.451 - 14120.029: 90.7944% ( 48) 00:10:56.222 14120.029 - 14179.607: 91.3121% ( 55) 00:10:56.222 14179.607 - 14239.185: 91.7922% ( 51) 00:10:56.222 14239.185 - 14298.764: 92.3758% ( 62) 00:10:56.222 14298.764 - 14358.342: 92.8464% ( 50) 00:10:56.222 14358.342 - 14417.920: 93.3735% ( 56) 00:10:56.222 14417.920 - 14477.498: 93.7594% ( 41) 00:10:56.222 14477.498 - 14537.076: 94.1642% ( 43) 00:10:56.222 14537.076 - 14596.655: 94.5501% ( 41) 00:10:56.222 14596.655 - 14656.233: 94.8701% ( 34) 00:10:56.222 14656.233 - 14715.811: 95.2466% ( 40) 00:10:56.222 14715.811 - 14775.389: 95.5008% ( 27) 00:10:56.222 14775.389 - 14834.967: 95.7455% ( 26) 00:10:56.222 14834.967 - 14894.545: 95.9808% ( 25) 00:10:56.222 14894.545 - 14954.124: 96.1879% ( 22) 00:10:56.222 14954.124 - 15013.702: 96.4138% ( 24) 00:10:56.222 15013.702 - 15073.280: 96.5738% ( 17) 00:10:56.222 15073.280 - 15132.858: 96.7903% ( 23) 00:10:56.222 15132.858 - 15192.436: 96.9127% ( 13) 00:10:56.222 15192.436 - 15252.015: 97.0350% ( 13) 00:10:56.222 15252.015 - 15371.171: 97.3080% ( 29) 00:10:56.222 15371.171 - 15490.327: 97.5621% ( 27) 00:10:56.222 15490.327 - 15609.484: 97.8257% ( 28) 00:10:56.222 15609.484 - 15728.640: 98.0610% ( 25) 00:10:56.222 15728.640 - 15847.796: 98.2398% ( 19) 00:10:56.222 15847.796 - 15966.953: 98.4187% ( 19) 00:10:56.222 15966.953 - 16086.109: 98.5222% ( 11) 00:10:56.222 16086.109 - 16205.265: 98.6069% ( 9) 00:10:56.222 16205.265 - 16324.422: 98.6822% ( 8) 00:10:56.223 16324.422 - 16443.578: 98.7199% ( 4) 00:10:56.223 16443.578 - 16562.735: 98.7764% ( 6) 00:10:56.223 16562.735 - 16681.891: 98.7952% ( 2) 00:10:56.223 30504.029 - 30742.342: 98.8140% ( 2) 00:10:56.223 30742.342 - 30980.655: 98.8705% ( 6) 00:10:56.223 30980.655 - 31218.967: 98.9175% ( 5) 00:10:56.223 31218.967 - 31457.280: 98.9552% ( 4) 00:10:56.223 31457.280 - 31695.593: 99.0023% ( 5) 00:10:56.223 31695.593 - 31933.905: 99.0493% ( 5) 00:10:56.223 31933.905 - 32172.218: 99.1058% ( 6) 00:10:56.223 32172.218 - 32410.531: 99.1434% ( 4) 00:10:56.223 32410.531 - 32648.844: 99.1999% ( 6) 00:10:56.223 32648.844 - 32887.156: 99.2376% ( 4) 00:10:56.223 32887.156 - 33125.469: 99.2752% ( 4) 00:10:56.223 33125.469 - 33363.782: 99.3223% ( 5) 00:10:56.223 33363.782 - 33602.095: 99.3694% ( 5) 00:10:56.223 33602.095 - 33840.407: 99.4164% ( 5) 00:10:56.223 33840.407 - 34078.720: 99.4635% ( 5) 00:10:56.223 34078.720 - 34317.033: 99.5105% ( 5) 00:10:56.223 34317.033 - 34555.345: 99.5576% ( 5) 00:10:56.223 34555.345 - 34793.658: 99.5953% ( 4) 00:10:56.223 34793.658 - 35031.971: 99.6517% ( 6) 00:10:56.223 35031.971 - 35270.284: 99.6894% ( 4) 00:10:56.223 35270.284 - 35508.596: 99.7459% ( 6) 00:10:56.223 35508.596 - 35746.909: 99.7929% ( 5) 00:10:56.223 35746.909 - 35985.222: 99.8494% ( 6) 00:10:56.223 
00:10:56.223  35985.222 - 36223.535:   99.9059% (     6)
00:10:56.223  36223.535 - 36461.847:   99.9529% (     5)
00:10:56.223  36461.847 - 36700.160:   99.9906% (     4)
00:10:56.223  36700.160 - 36938.473:  100.0000% (     1)
00:10:56.223
00:10:56.223 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:10:56.223 ==============================================================================
00:10:56.223        Range in us     Cumulative    IO count
00:10:56.223   8936.727 -  8996.305:    0.0094% (     1)
00:10:56.223        ...
00:10:56.223  11617.745 - 11677.324:   50.1883% (   189)
00:10:56.223        ...
00:10:56.223  35508.596 - 35746.909:  100.0000% (     3)
00:10:56.223
00:10:56.223 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:10:56.224 ==============================================================================
00:10:56.224        Range in us     Cumulative    IO count
00:10:56.224   8936.727 -  8996.305:    0.0094% (     1)
00:10:56.224        ...
00:10:56.224  11677.324 - 11736.902:   51.6096% (   174)
00:10:56.224        ...
00:10:56.224  35270.284 - 35508.596:  100.0000% (     4)
00:10:56.224
00:10:56.224 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:10:56.224 ==============================================================================
00:10:56.224        Range in us     Cumulative    IO count
00:10:56.224   8936.727 -  8996.305:    0.0377% (     4)
00:10:56.224        ...
00:10:56.224  11677.324 - 11736.902:   51.6378% (   188)
00:10:56.225        ...
00:10:56.225  33363.782 - 33602.095:  100.0000% (     6)
00:10:56.225
00:10:56.225 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:10:56.225 ==============================================================================
00:10:56.225        Range in us     Cumulative    IO count
00:10:56.225   8936.727 -  8996.305:    0.0282% (     3)
00:10:56.225        ...
00:10:56.225  11617.745 - 11677.324:   50.5177% (   201)
00:10:56.226        ...
00:10:56.226  32172.218 - 32410.531:  100.0000% (     3)
00:10:56.226
00:10:56.226 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:10:56.226 ==============================================================================
00:10:56.226        Range in us     Cumulative    IO count
00:10:56.226   8757.993 -  8817.571:    0.0094% (     1)
00:10:56.226        ...
00:10:56.226  11617.745 - 11677.324:   50.8566% (   185)
00:10:56.227        ...
00:10:56.227  30742.342 - 30980.655:  100.0000% (     6)
00:10:56.227
00:10:56.227  04:51:03 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:56.227
00:10:56.227 real	0m2.868s
00:10:56.227 user	0m2.461s
00:10:56.227 sys	0m0.291s
00:10:56.227  04:51:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:56.227 ************************************
00:10:56.227 END TEST nvme_perf
00:10:56.227  04:51:03 -- common/autotest_common.sh@10 -- # set +x
00:10:56.227 ************************************
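Each histogram above is cumulative: the second column is the percentage of all IOs that completed at or below the bucket's upper bound, so percentiles can be read straight off the table (for 0000:00:07.0, the median falls in the 11617.745 - 11677.324 us bucket and the slowest IO finished by 35746.909 us). A small helper in this spirit makes the lookup mechanical; the struct and the three bucket values are transcribed here purely for illustration and are not part of the perf tool itself:

	#include <stdio.h>

	struct bucket {
		double lo_us, hi_us;     /* bucket bounds in microseconds */
		double cumulative_pct;   /* cumulative share of IOs at this bucket */
	};

	/* Return the upper bound of the first bucket whose cumulative share
	 * reaches the requested percentile (e.g. 99.0 for p99). */
	static double percentile_us(const struct bucket *b, int n, double pct)
	{
		for (int i = 0; i < n; i++) {
			if (b[i].cumulative_pct >= pct) {
				return b[i].hi_us;
			}
		}
		return b[n - 1].hi_us;
	}

	int main(void)
	{
		/* Three buckets transcribed from the 0000:00:07.0 histogram above. */
		struct bucket b[] = {
			{  8936.727,  8996.305,   0.0094 },
			{ 11617.745, 11677.324,  50.1883 },
			{ 35508.596, 35746.909, 100.0000 },
		};
		printf("p50 <= %.3f us\n", percentile_us(b, 3, 50.0));
		return 0;
	}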
00:10:56.486  04:51:03 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:56.486  04:51:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:10:56.486  04:51:03 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:56.486  04:51:03 -- common/autotest_common.sh@10 -- # set +x
00:10:56.486 ************************************
00:10:56.486 START TEST nvme_hello_world
00:10:56.486 ************************************
00:10:56.486  04:51:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:56.744 Initializing NVMe Controllers
00:10:56.744 Attached to 0000:00:06.0
00:10:56.744   Namespace ID: 1 size: 6GB
00:10:56.744 Attached to 0000:00:07.0
00:10:56.744   Namespace ID: 1 size: 5GB
00:10:56.744 Attached to 0000:00:09.0
00:10:56.744   Namespace ID: 1 size: 1GB
00:10:56.744 Attached to 0000:00:08.0
00:10:56.744   Namespace ID: 1 size: 4GB
00:10:56.744   Namespace ID: 2 size: 4GB
00:10:56.744   Namespace ID: 3 size: 4GB
00:10:56.744 Initialization complete.
00:10:56.744 INFO: using host memory buffer for IO
00:10:56.744 Hello world!
00:10:56.744 INFO: using host memory buffer for IO
00:10:56.744 Hello world!
00:10:56.744 INFO: using host memory buffer for IO
00:10:56.745 Hello world!
00:10:56.745 INFO: using host memory buffer for IO
00:10:56.745 Hello world!
00:10:56.745 INFO: using host memory buffer for IO
00:10:56.745 Hello world!
00:10:56.745 INFO: using host memory buffer for IO
00:10:56.745 Hello world!
00:10:56.745
00:10:56.745 real	0m0.388s
00:10:56.745 user	0m0.206s
00:10:56.745 sys	0m0.136s
00:10:56.745  04:51:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:56.745  04:51:03 -- common/autotest_common.sh@10 -- # set +x
00:10:56.745 ************************************
00:10:56.745 END TEST nvme_hello_world
00:10:56.745 ************************************
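One "Hello world!" is printed per attached namespace: the example writes the string to LBA 0, reads it back, and prints the buffer, and the INFO lines record that it used an ordinary host DMA buffer rather than a controller memory buffer for the payload. A condensed sketch of that write-then-read loop using SPDK's public NVMe API; the example's real structure (probe callbacks, per-controller bookkeeping, error handling) is elided, and hello_world_ns/io_done are illustrative names:

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>
	#include "spdk/env.h"
	#include "spdk/nvme.h"

	static bool io_done;

	static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
	{
		io_done = true;
	}

	/* Write "Hello world!" to LBA 0 of one namespace, then read it back. */
	static void hello_world_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
	{
		/* DMA-safe host buffer; a 4 KiB sector size is assumed here. */
		char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
					 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

		snprintf(buf, 0x1000, "%s", "Hello world!\n");
		io_done = false;
		spdk_nvme_ns_cmd_write(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
		while (!io_done) {
			spdk_nvme_qpair_process_completions(qpair, 0);
		}

		memset(buf, 0, 0x1000);
		io_done = false;
		spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
		while (!io_done) {
			spdk_nvme_qpair_process_completions(qpair, 0);
		}

		printf("%s", buf);   /* the "Hello world!" lines in the log */
		spdk_free(buf);
	}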
00:10:56.745  04:51:03 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:56.745  04:51:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:56.745  04:51:03 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:56.745  04:51:03 -- common/autotest_common.sh@10 -- # set +x
00:10:56.745 ************************************
00:10:56.745 START TEST nvme_sgl
00:10:56.745 ************************************
00:10:56.745  04:51:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:57.003 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:10:57.003 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:10:57.003 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:10:57.262 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:10:57.262 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:10:57.262 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:10:57.262 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:10:57.262 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:10:57.262 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:10:57.262 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:10:57.262 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:10:57.262 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:10:57.262 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:10:57.262 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:10:57.262 NVMe Readv/Writev Request test
00:10:57.262 Attached to 0000:00:06.0
00:10:57.262 Attached to 0000:00:07.0
00:10:57.262 Attached to 0000:00:09.0
00:10:57.262 Attached to 0000:00:08.0
00:10:57.262 0000:00:06.0: build_io_request_2 test passed
00:10:57.262 0000:00:06.0: build_io_request_4 test passed
00:10:57.262 0000:00:06.0: build_io_request_5 test passed
00:10:57.262 0000:00:06.0: build_io_request_6 test passed
00:10:57.262 0000:00:06.0: build_io_request_7 test passed
00:10:57.262 0000:00:06.0: build_io_request_10 test passed
00:10:57.262 0000:00:07.0: build_io_request_2 test passed
00:10:57.262 0000:00:07.0: build_io_request_4 test passed
00:10:57.262 0000:00:07.0: build_io_request_5 test passed
00:10:57.262 0000:00:07.0: build_io_request_6 test passed
00:10:57.262 0000:00:07.0: build_io_request_7 test passed
00:10:57.262 0000:00:07.0: build_io_request_10 test passed
00:10:57.262 Cleaning up...
00:10:57.262
00:10:57.262 real	0m0.559s
00:10:57.262 user	0m0.368s
00:10:57.262 sys	0m0.143s
00:10:57.262  04:51:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:57.262  04:51:04 -- common/autotest_common.sh@10 -- # set +x
00:10:57.521 ************************************
00:10:57.521 END TEST nvme_sgl
00:10:57.521 ************************************
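The SGL test builds each request from a scatter-gather list rather than one flat buffer; layouts whose segment lengths do not add up to a whole number of blocks are rejected at request-build time, which produces the expected "Invalid IO length parameter" lines, while well-formed layouts complete and report "test passed". A minimal sketch of submitting one vectored write through SPDK's SGL callbacks; the four-element iovec layout and the sgl_ctx holder are illustrative, not the test's own structures:

	#include <sys/uio.h>
	#include "spdk/nvme.h"

	struct sgl_ctx {
		struct iovec iovs[4];  /* filled in by the caller */
		int iovpos;
	};

	static void reset_sgl(void *ref, uint32_t sgl_offset)
	{
		struct sgl_ctx *ctx = ref;

		ctx->iovpos = 0;  /* offset handling elided for brevity */
	}

	static int next_sge(void *ref, void **address, uint32_t *length)
	{
		struct sgl_ctx *ctx = ref;
		struct iovec *iov = &ctx->iovs[ctx->iovpos++];

		*address = iov->iov_base;
		*length = iov->iov_len;
		return 0;
	}

	/* Submit a vectored write; the driver walks the SGEs via the two
	 * callbacks and fails the build if their total length does not
	 * match lba_count blocks. */
	static int submit_writev(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
				 struct sgl_ctx *ctx, uint32_t lba_count,
				 spdk_nvme_cmd_cb cb_fn)
	{
		return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* LBA */, lba_count,
					       cb_fn, ctx, 0, reset_sgl, next_sge);
	}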
00:10:57.521  04:51:04 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:57.521  04:51:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:57.521  04:51:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:57.521  04:51:04 -- common/autotest_common.sh@10 -- # set +x
00:10:57.521 ************************************
00:10:57.521 START TEST nvme_e2edp
00:10:57.521 ************************************
00:10:57.521  04:51:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:57.781 NVMe Write/Read with End-to-End data protection test
00:10:57.781 Attached to 0000:00:06.0
00:10:57.781 Attached to 0000:00:07.0
00:10:57.781 Attached to 0000:00:09.0
00:10:57.781 Attached to 0000:00:08.0
00:10:57.781 Cleaning up...
00:10:57.781
00:10:57.781 real	0m0.289s
00:10:57.781 user	0m0.114s
00:10:57.781 sys	0m0.128s
00:10:57.781  04:51:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:57.781  04:51:04 -- common/autotest_common.sh@10 -- # set +x
00:10:57.781 ************************************
00:10:57.781 END TEST nvme_e2edp
00:10:57.781 ************************************
00:10:57.781  04:51:04 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:57.781  04:51:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:57.781  04:51:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:57.781  04:51:04 -- common/autotest_common.sh@10 -- # set +x
00:10:57.781 ************************************
00:10:57.781 START TEST nvme_reserve
00:10:57.781 ************************************
00:10:57.781  04:51:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:58.049 =====================================================
00:10:58.049 NVMe Controller at PCI bus 0, device 6, function 0
00:10:58.049 =====================================================
00:10:58.049 Reservations: Not Supported
00:10:58.049 =====================================================
00:10:58.049 NVMe Controller at PCI bus 0, device 7, function 0
00:10:58.049 =====================================================
00:10:58.049 Reservations: Not Supported
00:10:58.049 =====================================================
00:10:58.049 NVMe Controller at PCI bus 0, device 9, function 0
00:10:58.049 =====================================================
00:10:58.049 Reservations: Not Supported
00:10:58.049 =====================================================
00:10:58.049 NVMe Controller at PCI bus 0, device 8, function 0
00:10:58.049 =====================================================
00:10:58.049 Reservations: Not Supported
00:10:58.049 Reservation test passed
00:10:58.049
00:10:58.049 real	0m0.287s
00:10:58.049 user	0m0.121s
00:10:58.049 sys	0m0.121s
00:10:58.049  04:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:58.049  04:51:05 -- common/autotest_common.sh@10 -- # set +x
00:10:58.049 ************************************
00:10:58.049 END TEST nvme_reserve
00:10:58.049 ************************************
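"Not Supported" is decided from the controller's identify data: reservations are an optional NVMe feature that a controller advertises through a bit in its Optional NVM Command Support (ONCS) field. A check in that spirit, using SPDK's controller-data accessor; whether the test gates on exactly this field, rather than also probing namespace reservation capabilities, is an assumption here:

	#include <stdbool.h>
	#include "spdk/nvme.h"

	/* True when the controller advertises reservation support in ONCS. */
	static bool supports_reservations(struct spdk_nvme_ctrlr *ctrlr)
	{
		const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

		return cdata->oncs.reservations != 0;
	}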
00:10:58.049  04:51:05 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:58.049  04:51:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:10:58.049  04:51:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:58.049  04:51:05 -- common/autotest_common.sh@10 -- # set +x
00:10:58.049 ************************************
00:10:58.049 START TEST nvme_err_injection
00:10:58.049 ************************************
00:10:58.049  04:51:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:58.314 NVMe Error Injection test
00:10:58.314 Attached to 0000:00:06.0
00:10:58.314 Attached to 0000:00:07.0
00:10:58.314 Attached to 0000:00:09.0
00:10:58.314 Attached to 0000:00:08.0
00:10:58.314 0000:00:08.0: get features failed as expected
00:10:58.314 0000:00:06.0: get features failed as expected
00:10:58.314 0000:00:07.0: get features failed as expected
00:10:58.314 0000:00:09.0: get features failed as expected
00:10:58.314 0000:00:08.0: get features successfully as expected
00:10:58.314 0000:00:06.0: get features successfully as expected
00:10:58.314 0000:00:07.0: get features successfully as expected
00:10:58.314 0000:00:09.0: get features successfully as expected
00:10:58.314 0000:00:06.0: read failed as expected
00:10:58.314 0000:00:07.0: read failed as expected
00:10:58.314 0000:00:09.0: read failed as expected
00:10:58.314 0000:00:08.0: read failed as expected
00:10:58.314 0000:00:06.0: read successfully as expected
00:10:58.314 0000:00:07.0: read successfully as expected
00:10:58.314 0000:00:09.0: read successfully as expected
00:10:58.314 0000:00:08.0: read successfully as expected
00:10:58.314 Cleaning up...
00:10:58.572
00:10:58.572 real	0m0.362s
00:10:58.572 user	0m0.175s
00:10:58.572 sys	0m0.143s
00:10:58.572  04:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:58.572  04:51:05 -- common/autotest_common.sh@10 -- # set +x
00:10:58.572 ************************************
00:10:58.572 END TEST nvme_err_injection
00:10:58.572 ************************************
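Each "failed as expected" / "successfully as expected" pair comes from arming an injected error before issuing a command, then clearing it and reissuing: SPDK's driver can fake a failing completion status for a chosen opcode without involving the device. A sketch of arming one such failure for the admin Get Features command; the specific status code chosen is illustrative:

	#include "spdk/nvme.h"

	/* Make the next Get Features admin command complete with an injected
	 * error status; a NULL qpair targets the controller's admin queue. */
	static int arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
	{
		return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
				SPDK_NVME_OPC_GET_FEATURES,
				false,	/* still submit the command to the device */
				0,	/* no injected timeout */
				1,	/* fail exactly one command */
				SPDK_NVME_SCT_GENERIC,
				SPDK_NVME_SC_INVALID_FIELD);
	}

Once the armed command has failed, spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES) clears the trap, and the same request then completes normally, matching the "successfully as expected" lines above.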
00:10:58.572  04:51:05 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:58.572  04:51:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:10:58.572  04:51:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:58.572  04:51:05 -- common/autotest_common.sh@10 -- # set +x
00:10:58.572 ************************************
00:10:58.572 START TEST nvme_overhead
00:10:58.572 ************************************
00:10:58.573  04:51:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:59.951 Initializing NVMe Controllers
00:10:59.951 Attached to 0000:00:06.0
00:10:59.951 Attached to 0000:00:07.0
00:10:59.951 Attached to 0000:00:09.0
00:10:59.951 Attached to 0000:00:08.0
00:10:59.951 Initialization complete. Launching workers.
00:10:59.951 submit (in ns)   avg, min, max = 16439.6, 13164.5, 76349.1
00:10:59.951 complete (in ns) avg, min, max = 11469.7,  8710.9, 71752.7
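The averages above separate the submit path (building a command and ringing the doorbell) from the complete path (reaping its completion), both measured as nanoseconds of CPU time per 4 KiB IO (-o 4096, one-second run via -t 1). A sketch of the measurement idea using SPDK's tick counter; this is an approximation under the assumption that only the polling call that actually reaps the IO is charged to the complete path, and the real tool accumulates these deltas into the histograms below rather than printing them:

	#include <stdbool.h>
	#include <stdio.h>
	#include "spdk/env.h"
	#include "spdk/nvme.h"

	static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
	{
		*(bool *)arg = true;
	}

	/* Measure the CPU cost of submitting one read and reaping its completion. */
	static void time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
	{
		bool done = false;
		double ns_per_tick = 1e9 / (double)spdk_get_ticks_hz();

		uint64_t start = spdk_get_ticks();
		spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, &done, 0);
		uint64_t submit_ticks = spdk_get_ticks() - start;

		uint64_t complete_ticks = 0;
		while (!done) {
			start = spdk_get_ticks();
			int32_t reaped = spdk_nvme_qpair_process_completions(qpair, 0);
			if (reaped > 0) {
				/* charge only the poll call that reaped the IO */
				complete_ticks = spdk_get_ticks() - start;
			}
		}

		printf("submit %.1f ns, complete %.1f ns\n",
		       submit_ticks * ns_per_tick, complete_ticks * ns_per_tick);
	}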
00:10:59.951
00:10:59.951 Submit histogram
00:10:59.951 ================
00:10:59.951        Range in us     Cumulative     Count
00:10:59.951     13.149 -    13.207:    0.0110% (     1)
00:10:59.951        ...
00:10:59.951     14.720 -    14.778:   51.3311% (   254)
00:10:59.952        ...
00:10:59.952     76.335 -    76.800:  100.0000% (     1)
00:10:59.952
00:10:59.952 Complete histogram
00:10:59.952 ==================
00:10:59.952        Range in us     Cumulative     Count
00:10:59.952      8.669 -     8.727:    0.0220% (     2)
00:10:59.952        ...
00:10:59.952      9.716 -     9.775:   44.0814% (   542)
00:10:59.952      9.775 -     9.833:   49.5050
( 493) 00:10:59.952 9.833 - 9.891: 54.0924% ( 417) 00:10:59.952 9.891 - 9.949: 57.1067% ( 274) 00:10:59.952 9.949 - 10.007: 58.9109% ( 164) 00:10:59.952 10.007 - 10.065: 60.2090% ( 118) 00:10:59.952 10.065 - 10.124: 61.5732% ( 124) 00:10:59.952 10.124 - 10.182: 62.5963% ( 93) 00:10:59.952 10.182 - 10.240: 63.5204% ( 84) 00:10:59.952 10.240 - 10.298: 64.1914% ( 61) 00:10:59.952 10.298 - 10.356: 64.7305% ( 49) 00:10:59.952 10.356 - 10.415: 65.3025% ( 52) 00:10:59.952 10.415 - 10.473: 65.8526% ( 50) 00:10:59.952 10.473 - 10.531: 66.3036% ( 41) 00:10:59.952 10.531 - 10.589: 66.6887% ( 35) 00:10:59.952 10.589 - 10.647: 67.0407% ( 32) 00:10:59.952 10.647 - 10.705: 67.3927% ( 32) 00:10:59.952 10.705 - 10.764: 67.7338% ( 31) 00:10:59.952 10.764 - 10.822: 68.0968% ( 33) 00:10:59.952 10.822 - 10.880: 68.4378% ( 31) 00:10:59.952 10.880 - 10.938: 68.9109% ( 43) 00:10:59.952 10.938 - 10.996: 69.1859% ( 25) 00:10:59.952 10.996 - 11.055: 69.5050% ( 29) 00:10:59.952 11.055 - 11.113: 69.6590% ( 14) 00:10:59.952 11.113 - 11.171: 69.8460% ( 17) 00:10:59.952 11.171 - 11.229: 69.9560% ( 10) 00:10:59.952 11.229 - 11.287: 70.1430% ( 17) 00:10:59.952 11.287 - 11.345: 70.2640% ( 11) 00:10:59.952 11.345 - 11.404: 70.5391% ( 25) 00:10:59.952 11.404 - 11.462: 70.7481% ( 19) 00:10:59.952 11.462 - 11.520: 70.9351% ( 17) 00:10:59.952 11.520 - 11.578: 71.2321% ( 27) 00:10:59.952 11.578 - 11.636: 71.4851% ( 23) 00:10:59.952 11.636 - 11.695: 71.9802% ( 45) 00:10:59.952 11.695 - 11.753: 73.3443% ( 124) 00:10:59.952 11.753 - 11.811: 76.0946% ( 250) 00:10:59.952 11.811 - 11.869: 79.0759% ( 271) 00:10:59.952 11.869 - 11.927: 82.0022% ( 266) 00:10:59.952 11.927 - 11.985: 84.0484% ( 186) 00:10:59.952 11.985 - 12.044: 85.0495% ( 91) 00:10:59.952 12.044 - 12.102: 85.6216% ( 52) 00:10:59.952 12.102 - 12.160: 85.9406% ( 29) 00:10:59.952 12.160 - 12.218: 86.1166% ( 16) 00:10:59.952 12.218 - 12.276: 86.3916% ( 25) 00:10:59.952 12.276 - 12.335: 86.5347% ( 13) 00:10:59.952 12.335 - 12.393: 86.6997% ( 15) 00:10:59.952 12.393 - 12.451: 86.8757% ( 16) 00:10:59.952 12.451 - 12.509: 86.9637% ( 8) 00:10:59.952 12.509 - 12.567: 87.1287% ( 15) 00:10:59.952 12.567 - 12.625: 87.3487% ( 20) 00:10:59.952 12.625 - 12.684: 87.5798% ( 21) 00:10:59.952 12.684 - 12.742: 87.8878% ( 28) 00:10:59.952 12.742 - 12.800: 88.2838% ( 36) 00:10:59.952 12.800 - 12.858: 88.6579% ( 34) 00:10:59.952 12.858 - 12.916: 88.9219% ( 24) 00:10:59.952 12.916 - 12.975: 89.0869% ( 15) 00:10:59.952 12.975 - 13.033: 89.2959% ( 19) 00:10:59.952 13.033 - 13.091: 89.4279% ( 12) 00:10:59.952 13.091 - 13.149: 89.5270% ( 9) 00:10:59.952 13.149 - 13.207: 89.6040% ( 7) 00:10:59.952 13.207 - 13.265: 89.6590% ( 5) 00:10:59.952 13.265 - 13.324: 89.7250% ( 6) 00:10:59.952 13.324 - 13.382: 89.7910% ( 6) 00:10:59.952 13.382 - 13.440: 89.8240% ( 3) 00:10:59.952 13.440 - 13.498: 89.8350% ( 1) 00:10:59.952 13.498 - 13.556: 89.8460% ( 1) 00:10:59.952 13.556 - 13.615: 89.8680% ( 2) 00:10:59.952 13.615 - 13.673: 89.9010% ( 3) 00:10:59.952 13.673 - 13.731: 89.9120% ( 1) 00:10:59.952 13.731 - 13.789: 89.9450% ( 3) 00:10:59.952 13.789 - 13.847: 89.9670% ( 2) 00:10:59.952 13.847 - 13.905: 89.9890% ( 2) 00:10:59.952 13.905 - 13.964: 90.0110% ( 2) 00:10:59.952 13.964 - 14.022: 90.0440% ( 3) 00:10:59.952 14.080 - 14.138: 90.0660% ( 2) 00:10:59.952 14.138 - 14.196: 90.1100% ( 4) 00:10:59.952 14.196 - 14.255: 90.1210% ( 1) 00:10:59.952 14.255 - 14.313: 90.1540% ( 3) 00:10:59.952 14.313 - 14.371: 90.1870% ( 3) 00:10:59.952 14.371 - 14.429: 90.2090% ( 2) 00:10:59.952 14.429 - 14.487: 90.2530% ( 4) 
00:10:59.952 14.487 - 14.545: 90.3410% ( 8) 00:10:59.952 14.545 - 14.604: 90.3850% ( 4) 00:10:59.952 14.604 - 14.662: 90.4730% ( 8) 00:10:59.952 14.662 - 14.720: 90.5611% ( 8) 00:10:59.952 14.720 - 14.778: 90.6821% ( 11) 00:10:59.952 14.778 - 14.836: 90.7481% ( 6) 00:10:59.952 14.836 - 14.895: 90.8361% ( 8) 00:10:59.952 14.895 - 15.011: 90.9021% ( 6) 00:10:59.952 15.011 - 15.127: 90.9241% ( 2) 00:10:59.952 15.127 - 15.244: 90.9681% ( 4) 00:10:59.952 15.244 - 15.360: 91.0121% ( 4) 00:10:59.952 15.360 - 15.476: 91.0671% ( 5) 00:10:59.952 15.476 - 15.593: 91.0781% ( 1) 00:10:59.952 15.593 - 15.709: 91.1111% ( 3) 00:10:59.952 15.709 - 15.825: 91.1331% ( 2) 00:10:59.952 15.825 - 15.942: 91.1881% ( 5) 00:10:59.952 15.942 - 16.058: 91.2211% ( 3) 00:10:59.952 16.058 - 16.175: 91.2651% ( 4) 00:10:59.952 16.175 - 16.291: 91.3311% ( 6) 00:10:59.952 16.291 - 16.407: 91.3531% ( 2) 00:10:59.952 16.407 - 16.524: 91.3971% ( 4) 00:10:59.952 16.524 - 16.640: 91.4521% ( 5) 00:10:59.952 16.640 - 16.756: 91.5402% ( 8) 00:10:59.952 16.756 - 16.873: 91.6172% ( 7) 00:10:59.952 16.873 - 16.989: 91.6722% ( 5) 00:10:59.952 16.989 - 17.105: 91.7492% ( 7) 00:10:59.952 17.105 - 17.222: 91.8702% ( 11) 00:10:59.952 17.222 - 17.338: 91.9252% ( 5) 00:10:59.952 17.338 - 17.455: 91.9692% ( 4) 00:10:59.952 17.455 - 17.571: 92.0792% ( 10) 00:10:59.952 17.571 - 17.687: 92.2332% ( 14) 00:10:59.952 17.687 - 17.804: 92.2772% ( 4) 00:10:59.952 17.804 - 17.920: 92.3982% ( 11) 00:10:59.952 17.920 - 18.036: 92.4532% ( 5) 00:10:59.952 18.036 - 18.153: 92.5413% ( 8) 00:10:59.952 18.153 - 18.269: 92.6183% ( 7) 00:10:59.952 18.269 - 18.385: 92.6953% ( 7) 00:10:59.952 18.385 - 18.502: 92.7173% ( 2) 00:10:59.952 18.502 - 18.618: 92.7723% ( 5) 00:10:59.952 18.618 - 18.735: 92.7943% ( 2) 00:10:59.952 18.735 - 18.851: 92.8273% ( 3) 00:10:59.952 18.851 - 18.967: 92.8493% ( 2) 00:10:59.952 18.967 - 19.084: 92.8713% ( 2) 00:10:59.952 19.200 - 19.316: 92.8933% ( 2) 00:10:59.952 19.316 - 19.433: 92.9043% ( 1) 00:10:59.952 19.433 - 19.549: 92.9263% ( 2) 00:10:59.952 19.549 - 19.665: 92.9813% ( 5) 00:10:59.952 19.665 - 19.782: 92.9923% ( 1) 00:10:59.952 19.782 - 19.898: 93.0253% ( 3) 00:10:59.952 19.898 - 20.015: 93.0363% ( 1) 00:10:59.952 20.015 - 20.131: 93.0473% ( 1) 00:10:59.952 20.131 - 20.247: 93.0583% ( 1) 00:10:59.952 20.247 - 20.364: 93.0803% ( 2) 00:10:59.952 20.480 - 20.596: 93.1243% ( 4) 00:10:59.952 20.713 - 20.829: 93.1463% ( 2) 00:10:59.952 20.829 - 20.945: 93.1793% ( 3) 00:10:59.952 20.945 - 21.062: 93.2233% ( 4) 00:10:59.952 21.062 - 21.178: 93.2453% ( 2) 00:10:59.952 21.178 - 21.295: 93.2673% ( 2) 00:10:59.952 21.295 - 21.411: 93.3223% ( 5) 00:10:59.952 21.411 - 21.527: 93.3553% ( 3) 00:10:59.952 21.644 - 21.760: 93.3773% ( 2) 00:10:59.952 21.760 - 21.876: 93.3883% ( 1) 00:10:59.952 21.876 - 21.993: 93.4103% ( 2) 00:10:59.953 21.993 - 22.109: 93.4433% ( 3) 00:10:59.953 22.225 - 22.342: 93.4653% ( 2) 00:10:59.953 22.342 - 22.458: 93.4873% ( 2) 00:10:59.953 22.458 - 22.575: 93.5094% ( 2) 00:10:59.953 22.575 - 22.691: 93.5424% ( 3) 00:10:59.953 22.691 - 22.807: 93.5754% ( 3) 00:10:59.953 22.807 - 22.924: 93.6084% ( 3) 00:10:59.953 23.273 - 23.389: 93.6414% ( 3) 00:10:59.953 23.505 - 23.622: 93.7074% ( 6) 00:10:59.953 23.622 - 23.738: 93.7954% ( 8) 00:10:59.953 23.738 - 23.855: 93.9384% ( 13) 00:10:59.953 23.855 - 23.971: 94.2134% ( 25) 00:10:59.953 23.971 - 24.087: 94.6755% ( 42) 00:10:59.953 24.087 - 24.204: 95.0385% ( 33) 00:10:59.953 24.204 - 24.320: 95.4785% ( 40) 00:10:59.953 24.320 - 24.436: 95.9956% ( 47) 00:10:59.953 24.436 - 
24.553: 96.5017% ( 46) 00:10:59.953 24.553 - 24.669: 97.0627% ( 51) 00:10:59.953 24.669 - 24.785: 97.4477% ( 35) 00:10:59.953 24.785 - 24.902: 97.8548% ( 37) 00:10:59.953 24.902 - 25.018: 98.1738% ( 29) 00:10:59.953 25.018 - 25.135: 98.3058% ( 12) 00:10:59.953 25.135 - 25.251: 98.4048% ( 9) 00:10:59.953 25.251 - 25.367: 98.4818% ( 7) 00:10:59.953 25.367 - 25.484: 98.5809% ( 9) 00:10:59.953 25.484 - 25.600: 98.6469% ( 6) 00:10:59.953 25.600 - 25.716: 98.7019% ( 5) 00:10:59.953 25.716 - 25.833: 98.7459% ( 4) 00:10:59.953 25.833 - 25.949: 98.7789% ( 3) 00:10:59.953 25.949 - 26.065: 98.8119% ( 3) 00:10:59.953 26.065 - 26.182: 98.8779% ( 6) 00:10:59.953 26.182 - 26.298: 98.8889% ( 1) 00:10:59.953 26.298 - 26.415: 98.9439% ( 5) 00:10:59.953 26.415 - 26.531: 98.9989% ( 5) 00:10:59.953 26.531 - 26.647: 99.0209% ( 2) 00:10:59.953 26.647 - 26.764: 99.0429% ( 2) 00:10:59.953 26.764 - 26.880: 99.0759% ( 3) 00:10:59.953 26.880 - 26.996: 99.0979% ( 2) 00:10:59.953 27.113 - 27.229: 99.1419% ( 4) 00:10:59.953 27.229 - 27.345: 99.1749% ( 3) 00:10:59.953 27.345 - 27.462: 99.1859% ( 1) 00:10:59.953 27.578 - 27.695: 99.2079% ( 2) 00:10:59.953 27.811 - 27.927: 99.2189% ( 1) 00:10:59.953 28.742 - 28.858: 99.2299% ( 1) 00:10:59.953 28.975 - 29.091: 99.2409% ( 1) 00:10:59.953 29.091 - 29.207: 99.2519% ( 1) 00:10:59.953 29.207 - 29.324: 99.2629% ( 1) 00:10:59.953 29.556 - 29.673: 99.2739% ( 1) 00:10:59.953 29.789 - 30.022: 99.2849% ( 1) 00:10:59.953 30.022 - 30.255: 99.3289% ( 4) 00:10:59.953 30.487 - 30.720: 99.3619% ( 3) 00:10:59.953 30.720 - 30.953: 99.3839% ( 2) 00:10:59.953 30.953 - 31.185: 99.4609% ( 7) 00:10:59.953 31.185 - 31.418: 99.4939% ( 3) 00:10:59.953 31.418 - 31.651: 99.5160% ( 2) 00:10:59.953 31.651 - 31.884: 99.5380% ( 2) 00:10:59.953 32.116 - 32.349: 99.6150% ( 7) 00:10:59.953 32.349 - 32.582: 99.6370% ( 2) 00:10:59.953 32.582 - 32.815: 99.6480% ( 1) 00:10:59.953 32.815 - 33.047: 99.6700% ( 2) 00:10:59.953 33.047 - 33.280: 99.7030% ( 3) 00:10:59.953 33.280 - 33.513: 99.7140% ( 1) 00:10:59.953 33.745 - 33.978: 99.7360% ( 2) 00:10:59.953 34.211 - 34.444: 99.7690% ( 3) 00:10:59.953 34.444 - 34.676: 99.8020% ( 3) 00:10:59.953 34.909 - 35.142: 99.8130% ( 1) 00:10:59.953 37.004 - 37.236: 99.8240% ( 1) 00:10:59.953 37.469 - 37.702: 99.8350% ( 1) 00:10:59.953 37.702 - 37.935: 99.8460% ( 1) 00:10:59.953 38.167 - 38.400: 99.8570% ( 1) 00:10:59.953 38.400 - 38.633: 99.8680% ( 1) 00:10:59.953 38.633 - 38.865: 99.8790% ( 1) 00:10:59.953 39.564 - 39.796: 99.9120% ( 3) 00:10:59.953 40.727 - 40.960: 99.9230% ( 1) 00:10:59.953 40.960 - 41.193: 99.9340% ( 1) 00:10:59.953 41.193 - 41.425: 99.9450% ( 1) 00:10:59.953 42.356 - 42.589: 99.9560% ( 1) 00:10:59.953 44.218 - 44.451: 99.9670% ( 1) 00:10:59.953 45.615 - 45.847: 99.9780% ( 1) 00:10:59.953 66.560 - 67.025: 99.9890% ( 1) 00:10:59.953 71.680 - 72.145: 100.0000% ( 1) 00:10:59.953 00:10:59.953 00:10:59.953 real 0m1.297s 00:10:59.953 user 0m1.106s 00:10:59.953 sys 0m0.143s 00:10:59.953 04:51:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.953 04:51:06 -- common/autotest_common.sh@10 -- # set +x 00:10:59.953 ************************************ 00:10:59.953 END TEST nvme_overhead 00:10:59.953 ************************************ 00:10:59.953 04:51:06 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:59.953 04:51:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:59.953 04:51:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.953 04:51:06 -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.953 ************************************ 00:10:59.953 START TEST nvme_arbitration 00:10:59.953 ************************************ 00:10:59.953 04:51:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:03.240 Initializing NVMe Controllers 00:11:03.240 Attached to 0000:00:06.0 00:11:03.240 Attached to 0000:00:07.0 00:11:03.240 Attached to 0000:00:09.0 00:11:03.240 Attached to 0000:00:08.0 00:11:03.240 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:03.240 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:03.240 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:03.240 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:03.240 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:03.240 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:03.240 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:03.240 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:03.240 Initialization complete. Launching workers. 00:11:03.240 Starting thread on core 1 with urgent priority queue 00:11:03.240 Starting thread on core 2 with urgent priority queue 00:11:03.240 Starting thread on core 3 with urgent priority queue 00:11:03.240 Starting thread on core 0 with urgent priority queue 00:11:03.240 QEMU NVMe Ctrl (12340 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:03.240 QEMU NVMe Ctrl (12342 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:03.240 QEMU NVMe Ctrl (12341 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:11:03.240 QEMU NVMe Ctrl (12342 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:11:03.240 QEMU NVMe Ctrl (12343 ) core 2: 704.00 IO/s 142.05 secs/100000 ios 00:11:03.240 QEMU NVMe Ctrl (12342 ) core 3: 618.67 IO/s 161.64 secs/100000 ios 00:11:03.240 ======================================================== 00:11:03.240 00:11:03.240 00:11:03.240 real 0m3.501s 00:11:03.240 user 0m9.506s 00:11:03.240 sys 0m0.154s 00:11:03.240 04:51:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.240 ************************************ 00:11:03.240 04:51:10 -- common/autotest_common.sh@10 -- # set +x 00:11:03.240 END TEST nvme_arbitration 00:11:03.240 ************************************ 00:11:03.499 04:51:10 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:03.499 04:51:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:03.499 04:51:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.499 04:51:10 -- common/autotest_common.sh@10 -- # set +x 00:11:03.499 ************************************ 00:11:03.499 START TEST nvme_single_aen 00:11:03.499 ************************************ 00:11:03.499 04:51:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:03.499 [2024-05-12 04:51:10.433940] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
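A note on the nvme_arbitration results above: each per-core line pairs an I/O rate with the projected wall time for the fixed budget of 100000 I/Os set by -n 100000, and the two columns cross-check as secs/100000 ios = 100000 / (IO/s). A quick verification in shell, with the value copied from the core 0 line above:

    awk 'BEGIN { printf "%.2f\n", 100000 / 618.67 }'
    # prints 161.64, matching "618.67 IO/s 161.64 secs/100000 ios"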
00:11:03.499 [2024-05-12 04:51:10.434032] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.499 [2024-05-12 04:51:10.595477] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:03.499 [2024-05-12 04:51:10.597042] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:03.499 [2024-05-12 04:51:10.598422] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:11:03.499 [2024-05-12 04:51:10.599854] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:03.758 Asynchronous Event Request test 00:11:03.758 Attached to 0000:00:06.0 00:11:03.758 Attached to 0000:00:07.0 00:11:03.758 Attached to 0000:00:09.0 00:11:03.758 Attached to 0000:00:08.0 00:11:03.758 Reset controller to setup AER completions for this process 00:11:03.758 Registering asynchronous event callbacks... 00:11:03.758 Getting orig temperature thresholds of all controllers 00:11:03.759 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.759 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.759 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.759 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.759 Setting all controllers temperature threshold low to trigger AER 00:11:03.759 Waiting for all controllers temperature threshold to be set lower 00:11:03.759 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.759 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:03.759 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.759 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:03.759 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.759 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:03.759 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.759 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:03.759 Waiting for all controllers to trigger AER and reset threshold 00:11:03.759 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.759 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.759 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.759 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.759 Cleaning up... 
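The aer test above works by saving each controller's original temperature threshold (343 Kelvin), setting the threshold below the reported composite temperature (323 Kelvin) so the controller immediately raises a temperature-threshold AER, then restoring the threshold in the aer_cb callback. The same sequence can be tried by hand against a kernel-attached controller; this is a minimal sketch with nvme-cli, assuming /dev/nvme0 is the target (feature ID 0x04 is the NVMe Temperature Threshold feature):

    # read the current threshold (feature 0x04)
    nvme get-feature /dev/nvme0 -f 0x04
    # drop the threshold to 320 K (0x140), below the 323 K composite temperature, to trigger an AER
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x140
    # after the async event fires, restore the original 343 K (0x157) threshold
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x157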
00:11:03.759 00:11:03.759 real 0m0.248s 00:11:03.759 user 0m0.087s 00:11:03.759 sys 0m0.116s 00:11:03.759 04:51:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.759 04:51:10 -- common/autotest_common.sh@10 -- # set +x 00:11:03.759 ************************************ 00:11:03.759 END TEST nvme_single_aen 00:11:03.759 ************************************ 00:11:03.759 04:51:10 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:03.759 04:51:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:03.759 04:51:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.759 04:51:10 -- common/autotest_common.sh@10 -- # set +x 00:11:03.759 ************************************ 00:11:03.759 START TEST nvme_doorbell_aers 00:11:03.759 ************************************ 00:11:03.759 04:51:10 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:03.759 04:51:10 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:03.759 04:51:10 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:03.759 04:51:10 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:03.759 04:51:10 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:03.759 04:51:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:03.759 04:51:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:03.759 04:51:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:03.759 04:51:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:03.759 04:51:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:03.759 04:51:10 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:03.759 04:51:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:03.759 04:51:10 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:03.759 04:51:10 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:04.018 [2024-05-12 04:51:10.992809] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:13.994 Executing: test_write_invalid_db 00:11:13.994 Waiting for AER completion... 00:11:13.994 Failure: test_write_invalid_db 00:11:13.994 00:11:13.994 Executing: test_invalid_db_write_overflow_sq 00:11:13.994 Waiting for AER completion... 00:11:13.994 Failure: test_invalid_db_write_overflow_sq 00:11:13.994 00:11:13.994 Executing: test_invalid_db_write_overflow_cq 00:11:13.994 Waiting for AER completion... 00:11:13.994 Failure: test_invalid_db_write_overflow_cq 00:11:13.994 00:11:13.994 04:51:20 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:13.994 04:51:20 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:13.994 [2024-05-12 04:51:21.052164] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:23.967 Executing: test_write_invalid_db 00:11:23.967 Waiting for AER completion... 00:11:23.967 Failure: test_write_invalid_db 00:11:23.967 00:11:23.967 Executing: test_invalid_db_write_overflow_sq 00:11:23.967 Waiting for AER completion... 
00:11:23.967 Failure: test_invalid_db_write_overflow_sq 00:11:23.967 00:11:23.967 Executing: test_invalid_db_write_overflow_cq 00:11:23.967 Waiting for AER completion... 00:11:23.967 Failure: test_invalid_db_write_overflow_cq 00:11:23.967 00:11:23.967 04:51:30 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:23.967 04:51:30 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:24.226 [2024-05-12 04:51:31.108578] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:34.203 Executing: test_write_invalid_db 00:11:34.203 Waiting for AER completion... 00:11:34.203 Failure: test_write_invalid_db 00:11:34.203 00:11:34.203 Executing: test_invalid_db_write_overflow_sq 00:11:34.203 Waiting for AER completion... 00:11:34.203 Failure: test_invalid_db_write_overflow_sq 00:11:34.203 00:11:34.203 Executing: test_invalid_db_write_overflow_cq 00:11:34.203 Waiting for AER completion... 00:11:34.203 Failure: test_invalid_db_write_overflow_cq 00:11:34.203 00:11:34.203 04:51:40 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:34.203 04:51:40 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:34.203 [2024-05-12 04:51:41.161130] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.180 Executing: test_write_invalid_db 00:11:44.180 Waiting for AER completion... 00:11:44.180 Failure: test_write_invalid_db 00:11:44.180 00:11:44.180 Executing: test_invalid_db_write_overflow_sq 00:11:44.180 Waiting for AER completion... 00:11:44.180 Failure: test_invalid_db_write_overflow_sq 00:11:44.180 00:11:44.180 Executing: test_invalid_db_write_overflow_cq 00:11:44.180 Waiting for AER completion... 00:11:44.180 Failure: test_invalid_db_write_overflow_cq 00:11:44.180 00:11:44.180 00:11:44.180 real 0m40.232s 00:11:44.180 user 0m33.681s 00:11:44.180 sys 0m6.193s 00:11:44.180 04:51:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.180 04:51:50 -- common/autotest_common.sh@10 -- # set +x 00:11:44.180 ************************************ 00:11:44.181 END TEST nvme_doorbell_aers 00:11:44.181 ************************************ 00:11:44.181 04:51:50 -- nvme/nvme.sh@97 -- # uname 00:11:44.181 04:51:50 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:44.181 04:51:50 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:44.181 04:51:50 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:44.181 04:51:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.181 04:51:50 -- common/autotest_common.sh@10 -- # set +x 00:11:44.181 ************************************ 00:11:44.181 START TEST nvme_multi_aen 00:11:44.181 ************************************ 00:11:44.181 04:51:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:44.181 [2024-05-12 04:51:51.028385] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
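Condensed, the nvme_doorbell_aers run above is the following loop, built only from commands visible in this log: gen_nvme.sh emits the controller config, jq pulls out the PCIe addresses, and each address gets a 10-second doorbell_aers run under timeout --preserve-status.

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r "trtype:PCIe traddr:$bdf"
    done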
00:11:44.181 [2024-05-12 04:51:51.028520] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.181 [2024-05-12 04:51:51.220467] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:44.181 [2024-05-12 04:51:51.220560] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.220602] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.220621] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.222343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:44.181 [2024-05-12 04:51:51.222381] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.222405] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.222423] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.223753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:11:44.181 [2024-05-12 04:51:51.223786] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.223807] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.223828] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.224988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:44.181 [2024-05-12 04:51:51.225016] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.225036] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 [2024-05-12 04:51:51.225054] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64977) is not found. Dropping the request. 00:11:44.181 Child process pid: 65495 00:11:44.181 [2024-05-12 04:51:51.230927] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:44.181 [2024-05-12 04:51:51.231035] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.440 [Child] Asynchronous Event Request test 00:11:44.440 [Child] Attached to 0000:00:06.0 00:11:44.440 [Child] Attached to 0000:00:07.0 00:11:44.440 [Child] Attached to 0000:00:09.0 00:11:44.440 [Child] Attached to 0000:00:08.0 00:11:44.440 [Child] Registering asynchronous event callbacks... 00:11:44.440 [Child] Getting orig temperature thresholds of all controllers 00:11:44.440 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:44.440 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 [Child] Cleaning up... 00:11:44.440 Asynchronous Event Request test 00:11:44.440 Attached to 0000:00:06.0 00:11:44.440 Attached to 0000:00:07.0 00:11:44.440 Attached to 0000:00:09.0 00:11:44.440 Attached to 0000:00:08.0 00:11:44.440 Reset controller to setup AER completions for this process 00:11:44.440 Registering asynchronous event callbacks... 
00:11:44.440 Getting orig temperature thresholds of all controllers 00:11:44.440 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:44.440 Setting all controllers temperature threshold low to trigger AER 00:11:44.440 Waiting for all controllers temperature threshold to be set lower 00:11:44.440 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:44.440 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:44.440 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:44.440 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:44.440 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:44.440 Waiting for all controllers to trigger AER and reset threshold 00:11:44.440 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.440 Cleaning up... 00:11:44.440 00:11:44.440 real 0m0.532s 00:11:44.440 user 0m0.209s 00:11:44.440 sys 0m0.223s 00:11:44.440 04:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.440 04:51:51 -- common/autotest_common.sh@10 -- # set +x 00:11:44.440 ************************************ 00:11:44.440 END TEST nvme_multi_aen 00:11:44.440 ************************************ 00:11:44.440 04:51:51 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:44.440 04:51:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:44.440 04:51:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.440 04:51:51 -- common/autotest_common.sh@10 -- # set +x 00:11:44.440 ************************************ 00:11:44.440 START TEST nvme_startup 00:11:44.440 ************************************ 00:11:44.440 04:51:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:44.699 Initializing NVMe Controllers 00:11:44.699 Attached to 0000:00:06.0 00:11:44.699 Attached to 0000:00:07.0 00:11:44.699 Attached to 0000:00:09.0 00:11:44.699 Attached to 0000:00:08.0 00:11:44.699 Initialization complete. 00:11:44.699 Time used:190794.172 (us). 
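For scale, the Time used figure printed by the startup test is in microseconds, so attaching and initializing all four controllers took roughly 0.19 s of the 0m0.265s wall time reported next:

    awk 'BEGIN { printf "%.3f s\n", 190794.172 / 1e6 }'   # 0.191 s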
00:11:44.699 00:11:44.699 real 0m0.265s 00:11:44.699 user 0m0.101s 00:11:44.699 sys 0m0.119s 00:11:44.699 04:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.699 04:51:51 -- common/autotest_common.sh@10 -- # set +x 00:11:44.699 ************************************ 00:11:44.699 END TEST nvme_startup 00:11:44.699 ************************************ 00:11:44.958 04:51:51 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:44.958 04:51:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:44.958 04:51:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.958 04:51:51 -- common/autotest_common.sh@10 -- # set +x 00:11:44.958 ************************************ 00:11:44.958 START TEST nvme_multi_secondary 00:11:44.958 ************************************ 00:11:44.958 04:51:51 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:11:44.958 04:51:51 -- nvme/nvme.sh@52 -- # pid0=65540 00:11:44.958 04:51:51 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:44.958 04:51:51 -- nvme/nvme.sh@54 -- # pid1=65541 00:11:44.958 04:51:51 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:44.958 04:51:51 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:49.169 Initializing NVMe Controllers 00:11:49.169 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:49.169 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:49.169 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:49.169 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:49.169 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:49.169 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:49.169 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:49.169 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:49.169 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:49.169 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:49.169 Initialization complete. Launching workers. 
00:11:49.169 ======================================================== 00:11:49.169 Latency(us) 00:11:49.169 Device Information : IOPS MiB/s Average min max 00:11:49.169 PCIE (0000:00:06.0) NSID 1 from core 1: 5432.87 21.22 2943.27 1110.24 6243.04 00:11:49.169 PCIE (0000:00:07.0) NSID 1 from core 1: 5432.87 21.22 2944.46 1135.52 5726.86 00:11:49.169 PCIE (0000:00:09.0) NSID 1 from core 1: 5432.87 21.22 2944.43 1173.03 5452.45 00:11:49.169 PCIE (0000:00:08.0) NSID 1 from core 1: 5432.87 21.22 2944.42 1137.86 6004.79 00:11:49.169 PCIE (0000:00:08.0) NSID 2 from core 1: 5432.87 21.22 2944.37 1113.74 6611.18 00:11:49.169 PCIE (0000:00:08.0) NSID 3 from core 1: 5432.87 21.22 2946.30 1105.08 12434.81 00:11:49.169 ======================================================== 00:11:49.169 Total : 32597.24 127.33 2944.54 1105.08 12434.81 00:11:49.169 00:11:49.169 Initializing NVMe Controllers 00:11:49.169 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:49.169 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:49.169 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:49.169 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:49.169 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:49.169 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:49.169 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:49.169 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:49.169 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:49.169 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:49.169 Initialization complete. Launching workers. 00:11:49.169 ======================================================== 00:11:49.169 Latency(us) 00:11:49.169 Device Information : IOPS MiB/s Average min max 00:11:49.169 PCIE (0000:00:06.0) NSID 1 from core 2: 2478.18 9.68 6453.67 1376.04 13461.26 00:11:49.169 PCIE (0000:00:07.0) NSID 1 from core 2: 2478.18 9.68 6456.23 1265.91 12875.67 00:11:49.169 PCIE (0000:00:09.0) NSID 1 from core 2: 2478.18 9.68 6464.83 1376.36 12817.97 00:11:49.169 PCIE (0000:00:08.0) NSID 1 from core 2: 2478.18 9.68 6465.10 1389.33 12884.99 00:11:49.169 PCIE (0000:00:08.0) NSID 2 from core 2: 2478.18 9.68 6465.44 1381.99 12469.80 00:11:49.169 PCIE (0000:00:08.0) NSID 3 from core 2: 2478.18 9.68 6465.43 1382.08 12804.85 00:11:49.169 ======================================================== 00:11:49.169 Total : 14869.10 58.08 6461.78 1265.91 13461.26 00:11:49.169 00:11:49.169 04:51:55 -- nvme/nvme.sh@56 -- # wait 65540 00:11:50.545 Initializing NVMe Controllers 00:11:50.545 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:50.545 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:50.545 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:50.545 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:50.545 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:50.545 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:50.545 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:50.545 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:50.545 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:50.545 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:50.545 Initialization complete. Launching workers. 
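The spdk_nvme_perf tables above share one layout: each row is a namespace, and the numeric columns are IOPS, MiB/s, then average/min/max latency in microseconds; the Total row sums the per-row IOPS and MiB/s and reports aggregate latency stats. Because the IOPS column always sits five fields from the end of a row, a Total can be re-derived with awk over one table's rows saved to a file (perf.log is a hypothetical name):

    # re-derive the first core 1 table's Total IOPS (~32597.24) from its six rows
    grep 'from core 1:' perf.log | awk '{ total += $(NF-4) } END { printf "%.2f IOPS\n", total }'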
00:11:50.545 ======================================================== 00:11:50.545 Latency(us) 00:11:50.545 Device Information : IOPS MiB/s Average min max 00:11:50.545 PCIE (0000:00:06.0) NSID 1 from core 0: 8596.93 33.58 1859.71 966.29 12850.73 00:11:50.545 PCIE (0000:00:07.0) NSID 1 from core 0: 8596.93 33.58 1860.64 1007.81 12986.26 00:11:50.545 PCIE (0000:00:09.0) NSID 1 from core 0: 8596.93 33.58 1860.58 892.42 12766.92 00:11:50.545 PCIE (0000:00:08.0) NSID 1 from core 0: 8596.93 33.58 1860.53 882.28 13448.21 00:11:50.545 PCIE (0000:00:08.0) NSID 2 from core 0: 8596.93 33.58 1860.46 812.11 13178.58 00:11:50.545 PCIE (0000:00:08.0) NSID 3 from core 0: 8596.93 33.58 1860.40 733.08 13275.55 00:11:50.545 ======================================================== 00:11:50.545 Total : 51581.58 201.49 1860.39 733.08 13448.21 00:11:50.545 00:11:50.545 04:51:57 -- nvme/nvme.sh@57 -- # wait 65541 00:11:50.545 04:51:57 -- nvme/nvme.sh@61 -- # pid0=65622 00:11:50.545 04:51:57 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:50.545 04:51:57 -- nvme/nvme.sh@63 -- # pid1=65623 00:11:50.545 04:51:57 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:50.545 04:51:57 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:54.726 Initializing NVMe Controllers 00:11:54.726 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:54.726 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:54.726 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:54.726 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:54.726 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:11:54.726 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:11:54.726 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:11:54.726 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:11:54.726 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:11:54.726 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:11:54.726 Initialization complete. Launching workers. 
00:11:54.726 ======================================================== 00:11:54.726 Latency(us) 00:11:54.726 Device Information : IOPS MiB/s Average min max 00:11:54.726 PCIE (0000:00:06.0) NSID 1 from core 1: 5486.35 21.43 2914.61 1033.03 7384.50 00:11:54.726 PCIE (0000:00:07.0) NSID 1 from core 1: 5486.35 21.43 2915.70 1057.78 5898.31 00:11:54.726 PCIE (0000:00:09.0) NSID 1 from core 1: 5486.35 21.43 2915.59 1034.01 5930.43 00:11:54.726 PCIE (0000:00:08.0) NSID 1 from core 1: 5486.35 21.43 2915.69 1036.93 5753.79 00:11:54.726 PCIE (0000:00:08.0) NSID 2 from core 1: 5486.35 21.43 2915.61 1033.25 6266.98 00:11:54.726 PCIE (0000:00:08.0) NSID 3 from core 1: 5486.35 21.43 2915.61 1043.56 6871.67 00:11:54.726 ======================================================== 00:11:54.727 Total : 32918.09 128.59 2915.47 1033.03 7384.50 00:11:54.727 00:11:54.727 Initializing NVMe Controllers 00:11:54.727 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:54.727 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:54.727 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:54.727 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:54.727 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:54.727 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:54.727 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:54.727 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:54.727 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:54.727 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:54.727 Initialization complete. Launching workers. 00:11:54.727 ======================================================== 00:11:54.727 Latency(us) 00:11:54.727 Device Information : IOPS MiB/s Average min max 00:11:54.727 PCIE (0000:00:06.0) NSID 1 from core 0: 5572.70 21.77 2869.40 975.61 11058.06 00:11:54.727 PCIE (0000:00:07.0) NSID 1 from core 0: 5572.70 21.77 2870.58 1014.45 10137.69 00:11:54.727 PCIE (0000:00:09.0) NSID 1 from core 0: 5572.70 21.77 2870.53 1022.46 10123.60 00:11:54.727 PCIE (0000:00:08.0) NSID 1 from core 0: 5572.70 21.77 2870.47 1025.98 10054.18 00:11:54.727 PCIE (0000:00:08.0) NSID 2 from core 0: 5572.70 21.77 2870.52 1010.46 10394.51 00:11:54.727 PCIE (0000:00:08.0) NSID 3 from core 0: 5572.70 21.77 2870.49 1016.97 10575.42 00:11:54.727 ======================================================== 00:11:54.727 Total : 33436.19 130.61 2870.33 975.61 11058.06 00:11:54.727 00:11:56.102 Initializing NVMe Controllers 00:11:56.102 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:56.102 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:56.102 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:56.102 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:56.102 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:11:56.102 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:11:56.102 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:11:56.102 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:11:56.102 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:11:56.102 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:11:56.102 Initialization complete. Launching workers. 
00:11:56.102 ======================================================== 00:11:56.102 Latency(us) 00:11:56.102 Device Information : IOPS MiB/s Average min max 00:11:56.102 PCIE (0000:00:06.0) NSID 1 from core 2: 3852.34 15.05 4151.71 986.73 12986.42 00:11:56.102 PCIE (0000:00:07.0) NSID 1 from core 2: 3852.34 15.05 4151.33 950.56 12682.05 00:11:56.102 PCIE (0000:00:09.0) NSID 1 from core 2: 3852.34 15.05 4149.16 983.93 13690.84 00:11:56.102 PCIE (0000:00:08.0) NSID 1 from core 2: 3852.34 15.05 4149.28 910.00 12165.19 00:11:56.102 PCIE (0000:00:08.0) NSID 2 from core 2: 3852.34 15.05 4148.80 862.31 17251.72 00:11:56.102 PCIE (0000:00:08.0) NSID 3 from core 2: 3855.54 15.06 4145.34 801.01 16896.89 00:11:56.102 ======================================================== 00:11:56.102 Total : 23117.26 90.30 4149.27 801.01 17251.72 00:11:56.102 00:11:56.102 04:52:03 -- nvme/nvme.sh@65 -- # wait 65622 00:11:56.102 04:52:03 -- nvme/nvme.sh@66 -- # wait 65623 00:11:56.102 00:11:56.102 real 0m11.139s 00:11:56.102 user 0m19.144s 00:11:56.102 sys 0m0.868s 00:11:56.102 04:52:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.102 ************************************ 00:11:56.102 END TEST nvme_multi_secondary 00:11:56.102 ************************************ 00:11:56.102 04:52:03 -- common/autotest_common.sh@10 -- # set +x 00:11:56.102 04:52:03 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:56.102 04:52:03 -- nvme/nvme.sh@102 -- # kill_stub 00:11:56.102 04:52:03 -- common/autotest_common.sh@1065 -- # [[ -e /proc/64542 ]] 00:11:56.102 04:52:03 -- common/autotest_common.sh@1066 -- # kill 64542 00:11:56.102 04:52:03 -- common/autotest_common.sh@1067 -- # wait 64542 00:11:56.669 [2024-05-12 04:52:03.645434] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:56.669 [2024-05-12 04:52:03.645532] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:56.669 [2024-05-12 04:52:03.645558] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:56.669 [2024-05-12 04:52:03.645596] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:57.605 [2024-05-12 04:52:04.657250] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:57.605 [2024-05-12 04:52:04.657328] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:57.605 [2024-05-12 04:52:04.657351] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:57.605 [2024-05-12 04:52:04.657372] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:58.173 [2024-05-12 04:52:05.165313] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:58.173 [2024-05-12 04:52:05.165381] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. 
Dropping the request. 00:11:58.173 [2024-05-12 04:52:05.165405] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:11:58.173 [2024-05-12 04:52:05.165426] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:12:00.077 [2024-05-12 04:52:06.678028] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:12:00.077 [2024-05-12 04:52:06.678127] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:12:00.077 [2024-05-12 04:52:06.678152] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:12:00.077 [2024-05-12 04:52:06.678177] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65494) is not found. Dropping the request. 00:12:00.077 04:52:06 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:12:00.077 04:52:06 -- common/autotest_common.sh@1073 -- # echo 2 00:12:00.077 04:52:06 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:00.077 04:52:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:00.077 04:52:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:00.077 04:52:06 -- common/autotest_common.sh@10 -- # set +x 00:12:00.077 ************************************ 00:12:00.077 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:00.077 ************************************ 00:12:00.077 04:52:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:00.077 * Looking for test storage... 
00:12:00.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:00.077 04:52:07 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:00.077 04:52:07 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:00.077 04:52:07 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:00.077 04:52:07 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:00.077 04:52:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:00.077 04:52:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:00.077 04:52:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:00.077 04:52:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:00.077 04:52:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:00.077 04:52:07 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:00.077 04:52:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:00.077 04:52:07 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65806 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:00.077 04:52:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65806 00:12:00.077 04:52:07 -- common/autotest_common.sh@819 -- # '[' -z 65806 ']' 00:12:00.077 04:52:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.077 04:52:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:00.077 04:52:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.077 04:52:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:00.077 04:52:07 -- common/autotest_common.sh@10 -- # set +x 00:12:00.337 [2024-05-12 04:52:07.219243] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:00.337 [2024-05-12 04:52:07.219387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65806 ] 00:12:00.337 [2024-05-12 04:52:07.394317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.595 [2024-05-12 04:52:07.577012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:00.595 [2024-05-12 04:52:07.577577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.595 [2024-05-12 04:52:07.577638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.595 [2024-05-12 04:52:07.577694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.595 [2024-05-12 04:52:07.577700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.971 04:52:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:01.971 04:52:08 -- common/autotest_common.sh@852 -- # return 0 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:12:01.971 04:52:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.971 04:52:08 -- common/autotest_common.sh@10 -- # set +x 00:12:01.971 nvme0n1 00:12:01.971 04:52:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_oogAc.txt 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:01.971 04:52:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.971 04:52:08 -- common/autotest_common.sh@10 -- # set +x 00:12:01.971 true 00:12:01.971 04:52:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715489528 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65842 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:01.971 04:52:08 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:03.906 04:52:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:03.906 04:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.906 04:52:10 -- common/autotest_common.sh@10 -- # set +x 00:12:03.906 [2024-05-12 04:52:10.995057] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:03.906 [2024-05-12 04:52:10.995430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:03.906 [2024-05-12 04:52:10.995475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:03.906 [2024-05-12 04:52:10.995496] nvme_qpair.c: 
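The reset-stuck-admin-command scenario is now fully armed: bdev_nvme_add_error_injection tells nvme0 to fail the next GET FEATURES admin command (opc 10) with sct 0 / sc 1, i.e. a generic Invalid Command Opcode status, and bdev_nvme_send_cmd submits exactly that command with a 15-second timeout, so the reset issued next has a pending request to complete manually. The completion JSON lands in /tmp/err_inj_oogAc.txt, and the script's base64_decode_bits helper later recovers the status bits from it; the same decode works standalone, with the status string copied from the output further down:

    spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
    printf '%s' "$spdk_nvme_status" | base64 -d | hexdump -ve '/1 "0x%02x\n"'
    # 16 bytes, all 0x00 except the 15th (0x02): in the completion's status word,
    # bit 0 is the phase tag, bits 1-8 the SC (here 0x1), and bits 9-11 the SCT (here 0x0)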
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.906 [2024-05-12 04:52:10.997334] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:03.906 04:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.906 04:52:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65842 00:12:03.906 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65842 00:12:03.906 04:52:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65842 00:12:03.906 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:03.906 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:12:03.906 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:03.906 04:52:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.906 04:52:11 -- common/autotest_common.sh@10 -- # set +x 00:12:04.165 04:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_oogAc.txt 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_oogAc.txt 00:12:04.165 04:52:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65806 00:12:04.165 04:52:11 -- common/autotest_common.sh@926 -- # '[' -z 65806 ']' 00:12:04.165 04:52:11 -- common/autotest_common.sh@930 -- # kill -0 65806 00:12:04.165 04:52:11 -- common/autotest_common.sh@931 -- # uname 00:12:04.165 04:52:11 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:04.165 04:52:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65806 00:12:04.165 killing process with pid 65806 00:12:04.165 04:52:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:04.165 04:52:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:04.165 04:52:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65806' 00:12:04.166 04:52:11 -- common/autotest_common.sh@945 -- # kill 65806 00:12:04.166 04:52:11 -- common/autotest_common.sh@950 -- # wait 65806 00:12:06.699 04:52:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:06.699 04:52:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:06.699 00:12:06.699 real 0m6.260s 00:12:06.699 user 0m22.269s 00:12:06.699 sys 0m0.625s 00:12:06.699 04:52:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.699 04:52:13 -- common/autotest_common.sh@10 -- # set +x 00:12:06.699 ************************************ 00:12:06.699 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:06.699 ************************************ 00:12:06.699 04:52:13 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:06.699 04:52:13 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:06.699 04:52:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:06.699 04:52:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:06.699 04:52:13 -- common/autotest_common.sh@10 -- # set +x 00:12:06.699 ************************************ 00:12:06.699 START TEST nvme_fio 00:12:06.699 ************************************ 00:12:06.699 04:52:13 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:12:06.699 04:52:13 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:06.699 04:52:13 -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:06.699 04:52:13 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:06.699 04:52:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:06.699 04:52:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:06.699 04:52:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:06.699 04:52:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:06.699 04:52:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:06.699 04:52:13 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:06.699 04:52:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:06.699 04:52:13 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:12:06.699 04:52:13 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:06.699 04:52:13 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:06.699 04:52:13 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:06.699 04:52:13 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:06.699 04:52:13 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:06.699 04:52:13 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:06.958 04:52:13 -- nvme/nvme.sh@41 -- # bs=4096 00:12:06.958 04:52:13 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:06.958 04:52:13 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:06.958 04:52:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:06.958 04:52:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:06.958 04:52:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:06.958 04:52:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:06.958 04:52:13 -- common/autotest_common.sh@1320 -- # shift 00:12:06.958 04:52:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:06.958 04:52:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:06.958 04:52:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:06.958 04:52:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:06.958 04:52:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:06.958 04:52:13 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:06.958 04:52:13 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:06.958 04:52:13 -- common/autotest_common.sh@1326 -- # break 00:12:06.958 04:52:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:06.958 04:52:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:07.217 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:07.217 fio-3.35 00:12:07.217 Starting 1 thread 00:12:10.506 00:12:10.506 test: (groupid=0, jobs=1): err= 0: pid=65993: Sun May 12 04:52:17 2024 00:12:10.506 read: IOPS=16.3k, BW=63.5MiB/s (66.6MB/s)(127MiB/2001msec) 00:12:10.506 slat (nsec): min=4624, max=89598, avg=6232.30, stdev=1925.33 00:12:10.506 clat (usec): min=281, max=8681, avg=3912.43, stdev=457.89 00:12:10.506 lat (usec): min=287, max=8728, avg=3918.66, stdev=458.56 00:12:10.506 clat percentiles (usec): 00:12:10.506 | 1.00th=[ 3425], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:12:10.506 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:12:10.506 | 70.00th=[ 3884], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:12:10.506 | 99.00th=[ 5473], 99.50th=[ 6128], 99.90th=[ 8029], 99.95th=[ 8160], 00:12:10.506 | 99.99th=[ 8455] 00:12:10.506 bw ( KiB/s): min=61352, max=68144, per=98.19%, avg=63850.67, stdev=3734.76, samples=3 00:12:10.506 iops : min=15338, max=17036, avg=15962.67, stdev=933.69, samples=3 00:12:10.506 write: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(127MiB/2001msec); 0 zone resets 00:12:10.506 slat (nsec): min=4668, max=60892, avg=6251.09, stdev=1807.25 00:12:10.506 clat (usec): min=249, max=8542, avg=3920.46, stdev=463.79 00:12:10.506 lat (usec): min=255, max=8554, avg=3926.71, stdev=464.43 00:12:10.506 clat percentiles (usec): 00:12:10.506 | 1.00th=[ 3425], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:12:10.506 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:12:10.506 | 70.00th=[ 3884], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:12:10.506 | 99.00th=[ 5342], 99.50th=[ 
6325], 99.90th=[ 8094], 99.95th=[ 8160], 00:12:10.506 | 99.99th=[ 8291] 00:12:10.506 bw ( KiB/s): min=60696, max=67464, per=97.53%, avg=63565.33, stdev=3499.44, samples=3 00:12:10.506 iops : min=15174, max=16866, avg=15891.33, stdev=874.86, samples=3 00:12:10.506 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:12:10.506 lat (msec) : 2=0.05%, 4=74.87%, 10=25.04% 00:12:10.506 cpu : usr=99.00%, sys=0.10%, ctx=14, majf=0, minf=606 00:12:10.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:10.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.506 issued rwts: total=32530,32604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.506 00:12:10.506 Run status group 0 (all jobs): 00:12:10.506 READ: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=127MiB (133MB), run=2001-2001msec 00:12:10.506 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=127MiB (134MB), run=2001-2001msec 00:12:10.506 ----------------------------------------------------- 00:12:10.506 Suppressions used: 00:12:10.506 count bytes template 00:12:10.506 1 32 /usr/src/fio/parse.c 00:12:10.506 1 8 libtcmalloc_minimal.so 00:12:10.506 ----------------------------------------------------- 00:12:10.506 00:12:10.506 04:52:17 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:10.506 04:52:17 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:10.506 04:52:17 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:12:10.506 04:52:17 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:10.766 04:52:17 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:12:10.766 04:52:17 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:11.025 04:52:18 -- nvme/nvme.sh@41 -- # bs=4096 00:12:11.025 04:52:18 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:12:11.025 04:52:18 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:12:11.025 04:52:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:11.025 04:52:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:11.025 04:52:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:11.025 04:52:18 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:11.025 04:52:18 -- common/autotest_common.sh@1320 -- # shift 00:12:11.025 04:52:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:11.025 04:52:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:11.025 04:52:18 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:11.025 04:52:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:11.025 04:52:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:11.025 04:52:18 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:11.025 04:52:18 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:12:11.025 04:52:18 -- common/autotest_common.sh@1326 -- # break 00:12:11.025 04:52:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:11.025 04:52:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:12:11.284 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:11.284 fio-3.35 00:12:11.284 Starting 1 thread 00:12:14.570 00:12:14.570 test: (groupid=0, jobs=1): err= 0: pid=66060: Sun May 12 04:52:21 2024 00:12:14.570 read: IOPS=16.6k, BW=64.9MiB/s (68.0MB/s)(130MiB/2001msec) 00:12:14.570 slat (nsec): min=4647, max=59287, avg=5904.47, stdev=1660.04 00:12:14.570 clat (usec): min=286, max=9022, avg=3831.26, stdev=429.60 00:12:14.570 lat (usec): min=292, max=9081, avg=3837.17, stdev=430.17 00:12:14.570 clat percentiles (usec): 00:12:14.570 | 1.00th=[ 3195], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3589], 00:12:14.570 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 00:12:14.570 | 70.00th=[ 3851], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4424], 00:12:14.570 | 99.00th=[ 4817], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7635], 00:12:14.570 | 99.99th=[ 8848] 00:12:14.570 bw ( KiB/s): min=63936, max=70080, per=99.50%, avg=66080.00, stdev=3467.09, samples=3 00:12:14.570 iops : min=15984, max=17520, avg=16520.00, stdev=866.77, samples=3 00:12:14.570 write: IOPS=16.6k, BW=65.0MiB/s (68.1MB/s)(130MiB/2001msec); 0 zone resets 00:12:14.570 slat (nsec): min=4728, max=55329, avg=6067.69, stdev=1768.56 00:12:14.570 clat (usec): min=342, max=8880, avg=3840.59, stdev=426.37 00:12:14.570 lat (usec): min=349, max=8891, avg=3846.66, stdev=426.96 00:12:14.570 clat percentiles (usec): 00:12:14.570 | 1.00th=[ 3195], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3589], 00:12:14.570 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:12:14.570 | 70.00th=[ 3884], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4424], 00:12:14.570 | 99.00th=[ 4817], 99.50th=[ 6521], 99.90th=[ 7635], 99.95th=[ 7767], 00:12:14.570 | 99.99th=[ 8717] 00:12:14.570 bw ( KiB/s): min=63208, max=70280, per=99.21%, avg=66026.67, stdev=3747.93, samples=3 00:12:14.570 iops : min=15802, max=17570, avg=16506.67, stdev=936.98, samples=3 00:12:14.570 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:14.570 lat (msec) : 2=0.05%, 4=75.19%, 10=24.73% 00:12:14.570 cpu : usr=99.00%, sys=0.05%, ctx=3, majf=0, minf=607 00:12:14.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.570 issued rwts: total=33221,33291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.570 00:12:14.570 Run status group 0 (all jobs): 00:12:14.570 READ: bw=64.9MiB/s (68.0MB/s), 64.9MiB/s-64.9MiB/s (68.0MB/s-68.0MB/s), io=130MiB (136MB), run=2001-2001msec 00:12:14.570 WRITE: bw=65.0MiB/s (68.1MB/s), 65.0MiB/s-65.0MiB/s (68.1MB/s-68.1MB/s), io=130MiB (136MB), run=2001-2001msec 00:12:14.570 ----------------------------------------------------- 00:12:14.570 Suppressions used: 00:12:14.570 count bytes template 00:12:14.570 1 32 /usr/src/fio/parse.c 00:12:14.570 1 8 libtcmalloc_minimal.so 00:12:14.570 
----------------------------------------------------- 00:12:14.570 00:12:14.570 04:52:21 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:14.570 04:52:21 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:14.571 04:52:21 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:12:14.571 04:52:21 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:14.829 04:52:21 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:12:14.829 04:52:21 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:15.089 04:52:22 -- nvme/nvme.sh@41 -- # bs=4096 00:12:15.089 04:52:22 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:12:15.089 04:52:22 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:12:15.089 04:52:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:15.089 04:52:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:15.089 04:52:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:15.089 04:52:22 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:15.089 04:52:22 -- common/autotest_common.sh@1320 -- # shift 00:12:15.089 04:52:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:15.089 04:52:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:15.089 04:52:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:15.089 04:52:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:15.089 04:52:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:15.089 04:52:22 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:15.089 04:52:22 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:15.089 04:52:22 -- common/autotest_common.sh@1326 -- # break 00:12:15.089 04:52:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:15.089 04:52:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:12:15.346 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:15.346 fio-3.35 00:12:15.346 Starting 1 thread 00:12:18.630 00:12:18.630 test: (groupid=0, jobs=1): err= 0: pid=66126: Sun May 12 04:52:25 2024 00:12:18.630 read: IOPS=16.1k, BW=62.7MiB/s (65.8MB/s)(125MiB/2001msec) 00:12:18.630 slat (nsec): min=4623, max=54256, avg=6332.07, stdev=1909.92 00:12:18.630 clat (usec): min=320, max=9143, avg=3963.83, stdev=548.01 00:12:18.630 lat (usec): min=327, max=9187, avg=3970.16, stdev=548.81 00:12:18.630 clat percentiles (usec): 00:12:18.630 | 1.00th=[ 3261], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3556], 00:12:18.630 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3916], 00:12:18.630 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4752], 00:12:18.630 | 99.00th=[ 5538], 99.50th=[ 6521], 99.90th=[ 8586], 99.95th=[ 8717], 00:12:18.630 | 99.99th=[ 8979] 00:12:18.630 
bw ( KiB/s): min=61224, max=65048, per=98.64%, avg=63344.00, stdev=1945.65, samples=3 00:12:18.630 iops : min=15306, max=16262, avg=15836.00, stdev=486.41, samples=3 00:12:18.630 write: IOPS=16.1k, BW=62.8MiB/s (65.9MB/s)(126MiB/2001msec); 0 zone resets 00:12:18.630 slat (nsec): min=4734, max=56289, avg=6497.12, stdev=1930.79 00:12:18.630 clat (usec): min=295, max=9020, avg=3972.19, stdev=531.23 00:12:18.630 lat (usec): min=302, max=9032, avg=3978.69, stdev=532.02 00:12:18.630 clat percentiles (usec): 00:12:18.630 | 1.00th=[ 3294], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:12:18.630 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3916], 00:12:18.630 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4752], 00:12:18.630 | 99.00th=[ 5407], 99.50th=[ 6325], 99.90th=[ 8225], 99.95th=[ 8586], 00:12:18.630 | 99.99th=[ 8848] 00:12:18.630 bw ( KiB/s): min=60768, max=65464, per=98.02%, avg=63045.33, stdev=2351.19, samples=3 00:12:18.630 iops : min=15192, max=16366, avg=15761.33, stdev=587.80, samples=3 00:12:18.630 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:18.630 lat (msec) : 2=0.05%, 4=62.26%, 10=37.66% 00:12:18.630 cpu : usr=99.00%, sys=0.10%, ctx=3, majf=0, minf=606 00:12:18.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:18.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.630 issued rwts: total=32125,32174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.630 00:12:18.630 Run status group 0 (all jobs): 00:12:18.630 READ: bw=62.7MiB/s (65.8MB/s), 62.7MiB/s-62.7MiB/s (65.8MB/s-65.8MB/s), io=125MiB (132MB), run=2001-2001msec 00:12:18.630 WRITE: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=126MiB (132MB), run=2001-2001msec 00:12:18.889 ----------------------------------------------------- 00:12:18.889 Suppressions used: 00:12:18.889 count bytes template 00:12:18.889 1 32 /usr/src/fio/parse.c 00:12:18.889 1 8 libtcmalloc_minimal.so 00:12:18.889 ----------------------------------------------------- 00:12:18.889 00:12:18.889 04:52:25 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:18.889 04:52:25 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:18.889 04:52:25 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:18.889 04:52:25 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:19.148 04:52:26 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:19.148 04:52:26 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:19.407 04:52:26 -- nvme/nvme.sh@41 -- # bs=4096 00:12:19.407 04:52:26 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:12:19.407 04:52:26 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:12:19.407 04:52:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:19.407 04:52:26 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:19.407 04:52:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:19.407 04:52:26 -- 
common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:19.407 04:52:26 -- common/autotest_common.sh@1320 -- # shift 00:12:19.407 04:52:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:19.407 04:52:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:19.407 04:52:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:19.407 04:52:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:19.407 04:52:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:19.407 04:52:26 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:19.407 04:52:26 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:19.407 04:52:26 -- common/autotest_common.sh@1326 -- # break 00:12:19.407 04:52:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:19.407 04:52:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:12:19.667 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:19.667 fio-3.35 00:12:19.667 Starting 1 thread 00:12:23.857 00:12:23.857 test: (groupid=0, jobs=1): err= 0: pid=66191: Sun May 12 04:52:30 2024 00:12:23.857 read: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(131MiB/2001msec) 00:12:23.857 slat (nsec): min=4644, max=57855, avg=5967.73, stdev=1519.94 00:12:23.857 clat (usec): min=335, max=9465, avg=3781.05, stdev=417.54 00:12:23.857 lat (usec): min=339, max=9523, avg=3787.02, stdev=418.22 00:12:23.857 clat percentiles (usec): 00:12:23.857 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3490], 00:12:23.857 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3687], 00:12:23.857 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:12:23.857 | 99.00th=[ 4490], 99.50th=[ 5014], 99.90th=[ 7635], 99.95th=[ 7898], 00:12:23.857 | 99.99th=[ 9241] 00:12:23.857 bw ( KiB/s): min=66496, max=71528, per=100.00%, avg=68688.00, stdev=2577.83, samples=3 00:12:23.857 iops : min=16624, max=17882, avg=17172.00, stdev=644.46, samples=3 00:12:23.857 write: IOPS=16.8k, BW=65.8MiB/s (69.0MB/s)(132MiB/2001msec); 0 zone resets 00:12:23.857 slat (nsec): min=4770, max=74634, avg=6097.38, stdev=1487.17 00:12:23.857 clat (usec): min=224, max=9296, avg=3796.44, stdev=425.18 00:12:23.857 lat (usec): min=230, max=9308, avg=3802.53, stdev=425.79 00:12:23.857 clat percentiles (usec): 00:12:23.857 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3490], 00:12:23.857 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3720], 00:12:23.857 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:12:23.857 | 99.00th=[ 4555], 99.50th=[ 5473], 99.90th=[ 7635], 99.95th=[ 8094], 00:12:23.857 | 99.99th=[ 9110] 00:12:23.857 bw ( KiB/s): min=66712, max=71280, per=100.00%, avg=68546.67, stdev=2412.96, samples=3 00:12:23.857 iops : min=16678, max=17820, avg=17136.67, stdev=603.24, samples=3 00:12:23.857 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:23.857 lat (msec) : 2=0.05%, 4=69.28%, 10=30.63% 00:12:23.857 cpu : usr=99.05%, sys=0.20%, ctx=3, majf=0, minf=604 00:12:23.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:23.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.857 issued rwts: total=33643,33694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.857 00:12:23.857 Run status group 0 (all jobs): 00:12:23.857 READ: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=131MiB (138MB), run=2001-2001msec 00:12:23.857 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=132MiB (138MB), run=2001-2001msec 00:12:24.116 ----------------------------------------------------- 00:12:24.116 Suppressions used: 00:12:24.116 count bytes template 00:12:24.116 1 32 /usr/src/fio/parse.c 00:12:24.116 1 8 libtcmalloc_minimal.so 00:12:24.116 ----------------------------------------------------- 00:12:24.116 00:12:24.116 04:52:31 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:24.116 04:52:31 -- nvme/nvme.sh@46 -- # true 00:12:24.116 00:12:24.116 real 0m17.727s 00:12:24.116 user 0m14.482s 00:12:24.116 sys 0m1.668s 00:12:24.116 04:52:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.116 ************************************ 00:12:24.116 END TEST nvme_fio 00:12:24.116 ************************************ 00:12:24.116 04:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:24.116 ************************************ 00:12:24.116 END TEST nvme 00:12:24.116 ************************************ 00:12:24.116 00:12:24.116 real 1m36.056s 00:12:24.116 user 3m49.383s 00:12:24.116 sys 0m14.051s 00:12:24.116 04:52:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.116 04:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:24.116 04:52:31 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:12:24.116 04:52:31 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:24.116 04:52:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:24.116 04:52:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.116 04:52:31 -- common/autotest_common.sh@10 -- # set +x 00:12:24.116 ************************************ 00:12:24.116 START TEST nvme_scc 00:12:24.116 ************************************ 00:12:24.116 04:52:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:24.116 * Looking for test storage... 
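All four fio passes above (0000:00:06.0 through 0000:00:09.0) go through the same fio_nvme wrapper: it resolves the sanitizer runtime with ldd so ASan is LD_PRELOADed ahead of the SPDK ioengine, then launches fio with a PCIe address as the "filename". A condensed sketch of the wrapper as the xtrace shows it, simplified to the libasan branch this ASan build takes:

    fio_nvme() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local asan_lib
        # ASan must come before the plugin in LD_PRELOAD or interposition fails
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

    # Note the filename syntax: BDFs are written with dots, not colons
    fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

The --bs=4096 comes from the identify step before each run: spdk_nvme_identify output is grepped for 'Extended Data LBA' to decide which block size to drive.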
00:12:24.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:24.116 04:52:31 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:24.116 04:52:31 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:24.116 04:52:31 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:24.116 04:52:31 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:24.116 04:52:31 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.116 04:52:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.116 04:52:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.116 04:52:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.116 04:52:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.116 04:52:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.116 04:52:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.116 04:52:31 -- paths/export.sh@5 -- # export PATH 00:12:24.116 04:52:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.116 04:52:31 -- nvme/functions.sh@10 -- # ctrls=() 00:12:24.116 04:52:31 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:24.116 04:52:31 -- nvme/functions.sh@11 -- # nvmes=() 00:12:24.116 04:52:31 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:24.116 04:52:31 -- nvme/functions.sh@12 -- # bdfs=() 00:12:24.116 04:52:31 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:24.116 04:52:31 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:24.116 04:52:31 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:24.116 04:52:31 -- nvme/functions.sh@14 -- # nvme_name= 00:12:24.116 04:52:31 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.116 04:52:31 -- nvme/nvme_scc.sh@12 -- # uname 00:12:24.116 04:52:31 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
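Before scanning anything, nvme/functions.sh sets up the bookkeeping declared above: three associative arrays plus an ordered list, which scan_nvme_ctrls fills one controller at a time. Roughly, with key names taken from the trace and the bdfs assignment assumed from the matching declaration:

    declare -A ctrls=()   # ctrls[nvme0]=nvme0
    declare -A nvmes=()   # nvmes[nvme0]=nvme0_ns, the per-controller namespace array
    declare -A bdfs=()    # bdfs[nvme0]=0000:00:09.0 (assumed from the declaration)
    declare -a ordered_ctrls=()

    # At the end of each scan_nvme_ctrls iteration the controller is registered:
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns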
00:12:24.116 04:52:31 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:24.116 04:52:31 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:24.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:24.683 Waiting for block devices as requested 00:12:24.683 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:24.941 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:24.941 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.200 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:30.545 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:30.545 04:52:37 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:30.545 04:52:37 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:30.545 04:52:37 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:30.546 04:52:37 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:12:30.546 04:52:37 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:12:30.546 04:52:37 -- scripts/common.sh@15 -- # local i 00:12:30.546 04:52:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:12:30.546 04:52:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:30.546 04:52:37 -- scripts/common.sh@24 -- # return 0 00:12:30.546 04:52:37 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:30.546 04:52:37 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:30.546 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.546 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
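Everything from the first IFS=:/read pair above through the rest of this dump is a single loop: nvme_get runs nvme-cli's id-ctrl against the device and evals every "reg : val" pair into the controller's associative array (nvme0[vid]=0x1b36, nvme0[ssvid]=0x1af4, and so on). A compact sketch of the mechanism; the real helper is more defensive, but the shape matches the trace:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"   # e.g. nvme0 becomes a global associative array
        # id-ctrl prints lines like "vid : 0x1b36"; skip entries with empty values
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue
            reg=${reg//[[:space:]]/}            # strip padding around the key
            eval "${ref}[$reg]=\"${val# }\""    # nvme0[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    nvme_get nvme0 id-ctrl /dev/nvme0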
00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.546 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.546 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.546 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 
04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:30.547 04:52:37 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.547 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.547 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:30.548 04:52:37 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:30.548 04:52:37 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:30.548 04:52:37 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:12:30.548 04:52:37 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:30.548 04:52:37 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:30.548 04:52:37 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:12:30.548 04:52:37 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:12:30.548 04:52:37 -- scripts/common.sh@15 -- # local i 00:12:30.548 04:52:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:12:30.548 04:52:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:30.548 04:52:37 -- scripts/common.sh@24 -- # return 0 00:12:30.548 04:52:37 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:30.548 04:52:37 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:30.548 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.548 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.548 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.548 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:30.548 04:52:37 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 
-- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
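The repeating `IFS=:` / `read -r reg val` / `eval` pattern above is SPDK's `nvme_get` helper (nvme/functions.sh, the @16-23 trace points) walking `nvme id-ctrl` output one field at a time and storing each non-empty value into a global associative array named after the device. A minimal sketch reconstructed from the visible trace fragments; the `NVME` variable and the whitespace trimming are assumptions, and the real helper does more bookkeeping than shown here:

```bash
#!/usr/bin/env bash
# Sketch of the nvme_get parser whose xtrace fills this log
# (nvme/functions.sh@16-23). NVME path and trimming are assumptions.
NVME=${NVME:-nvme}   # trace invokes /usr/local/src/nvme-cli/nvme

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # trace: local -gA 'nvme1=()'
    while IFS=: read -r reg val; do        # trace: IFS=: ; read -r reg val
        reg=${reg//[[:space:]]/}           # "sn   " -> "sn", "ps 0" -> "ps0"
        val=${val#"${val%%[![:space:]]*}"} # trim leading spaces only
        [[ -n $val ]] || continue          # trace: [[ -n 0x1b36 ]]
        eval "${ref}[${reg}]=\"\$val\""    # trace: eval 'nvme1[vid]="0x1b36"'
    done < <("$NVME" "$@")                 # e.g. nvme id-ctrl /dev/nvme1
}
# usage: nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[sn]}"
```

Because the last variable of `read` absorbs the rest of the line, multi-colon values such as the `lbaf` descriptors ("ms:0 lbads:9 rp:0") survive the split intact, which is why they appear quoted whole in the trace.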
00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.549 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:30.549 04:52:37 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:30.549 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 
04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- 
# IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.550 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.550 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.550 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:30.551 04:52:37 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:30.551 04:52:37 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:30.551 04:52:37 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:30.551 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.551 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 
04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:30.551 04:52:37 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.551 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.551 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.551 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 
04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:30.552 
04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:30.552 04:52:37 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:30.552 04:52:37 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:30.552 04:52:37 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:12:30.552 04:52:37 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:12:30.552 04:52:37 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:12:30.552 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.552 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.552 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:12:30.552 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.552 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
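At this point the trace has moved from controller fields to namespaces: functions.sh@53-58 binds a nameref to the controller's namespace array, globs the sysfs children, runs `nvme_get ... id-ns` on each, and records the namespace under its numeric index (`_ctrl_ns[${ns##*n}]=nvme1n1` above, then on to nvme1n2). A hedged reconstruction, reusing the `nvme_get` sketch from earlier; the function name and the sysfs path handling are approximations:

```bash
#!/usr/bin/env bash
# Sketch of the per-controller namespace walk seen at functions.sh@53-58.
# Relies on the nvme_get sketch above; scan_namespaces is an assumed name.
scan_namespaces() {
    local ctrl=$1                       # e.g. /sys/class/nvme/nvme1
    local -n _ctrl_ns=${ctrl##*/}_ns    # trace: local -n _ctrl_ns=nvme1_ns
    local ns ns_dev
    for ns in "$ctrl/${ctrl##*/}n"*; do             # nvme1n1, nvme1n2, ...
        [[ -e $ns ]] || continue                    # trace: [[ -e .../nvme1n1 ]]
        ns_dev=${ns##*/}                            # trace: ns_dev=nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # fill nvme1n1=( [nsze]=... )
        _ctrl_ns[${ns##*n}]=$ns_dev                 # index by namespace number
    done
}
# usage: declare -gA nvme1_ns=(); scan_namespaces /sys/class/nvme/nvme1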
00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
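The geometry fields captured so far pin down the namespace size: nvme1n2 reports nsze=0x100000 blocks and flbas=0x4, which selects LBA format 4, shown as 'ms:0 lbads:12 rp:0 (in use)' in the format table a few entries below. A short worked example of that arithmetic, using the values from this trace:

#!/usr/bin/env bash
# Capacity math for nvme1n2 as identified above: 0x100000 blocks of
# 2^12 = 4096 bytes each.
nsze=0x100000
flbas=0x4
lbads=12                                   # from "lbaf 4 : ms:0 lbads:12 rp:0 (in use)"

fmt=$(( flbas & 0xf ))                     # low nibble of FLBAS selects the format
bytes=$(( nsze * (1 << lbads) ))
printf 'lbaf%d: %d-byte blocks, %d blocks, %d bytes (%d GiB)\n' \
    "$fmt" $(( 1 << lbads )) "$nsze" "$bytes" $(( bytes >> 30 ))
# -> lbaf4: 4096-byte blocks, 1048576 blocks, 4294967296 bytes (4 GiB)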
00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.553 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.553 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:12:30.553 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 
04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:12:30.554 04:52:37 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:30.554 04:52:37 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:12:30.554 04:52:37 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
id-ns /dev/nvme1n3 00:12:30.554 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.554 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:12:30.554 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.554 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.554 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:12:30.555 04:52:37 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:30.555 04:52:37 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:30.555 04:52:37 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:12:30.555 04:52:37 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:30.555 04:52:37 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:30.555 04:52:37 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:12:30.555 04:52:37 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:12:30.555 04:52:37 -- scripts/common.sh@15 -- # local i 00:12:30.555 04:52:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:12:30.555 04:52:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:30.555 04:52:37 -- scripts/common.sh@24 -- # return 0 00:12:30.555 04:52:37 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:30.555 04:52:37 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:30.555 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.555 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:30.555 04:52:37 -- 
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.555 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:30.555 04:52:37 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.555 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 
04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.556 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.556 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:30.556 04:52:37 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:30.557 
04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 
'nvme2[oncs]="0x15d"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.557 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.557 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:30.557 04:52:37 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
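The ONCS word just captured for nvme2 (0x15d) packs per-command support bits. Decoded against the Identify Controller ONCS layout in the NVMe base spec (the bit names below come from the spec, not from these scripts, so treat this decoder as an illustrative sketch):

#!/usr/bin/env bash
# Decode nvme2's Optional NVM Command Support word from the trace above.
oncs=0x15d
names=('Compare' 'Write Uncorrectable' 'Dataset Management' 'Write Zeroes'
       'Save/Select in Features' 'Reservations' 'Timestamp' 'Verify' 'Copy')
for bit in "${!names[@]}"; do
    if (( oncs & (1 << bit) )); then
        printf 'bit %d set:   %s\n' "$bit" "${names[bit]}"
    else
        printf 'bit %d clear: %s\n' "$bit" "${names[bit]}"
    fi
done
# 0x15d sets bits 0,2,3,4,6,8: Compare, Dataset Management, Write Zeroes,
# Save/Select in Features, Timestamp and Copy are supported; Write
# Uncorrectable, Reservations and Verify are not.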
00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:30.558 04:52:37 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:30.558 04:52:37 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:30.558 04:52:37 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:30.558 04:52:37 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:30.558 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.558 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
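Stepping back, the functions.sh@47-@63 entries visible earlier give the whole discovery skeleton: walk /sys/class/nvme, resolve each controller's PCI address, skip filtered devices, then glob and identify every nvmeXnY namespace. A condensed sketch of that loop; pci_can_use here is a simplified stand-in for the real helper in scripts/common.sh, and the PCI_BLOCKED name is an assumption:

#!/usr/bin/env bash
# Condensed sketch of the enumeration this trace walks through.
declare -A ctrls nvmes bdfs

pci_can_use() {
    # allow everything unless the address appears in PCI_BLOCKED (assumed name)
    [[ " ${PCI_BLOCKED-} " != *" $1 "* ]]
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:08.0
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    for ns in "$ctrl/${ctrl_dev}n"*; do               # e.g. nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        echo "identify: /dev/${ns##*/}"               # nvme_get runs id-ns here
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                      # bookkeeping as at @60-@62
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$pci
done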
00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.558 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.558 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:30.558 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:30.559 04:52:37 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.559 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:30.559 04:52:37 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:30.559 04:52:37 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:30.559 04:52:37 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:12:30.559 04:52:37 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:30.559 04:52:37 -- nvme/functions.sh@47 -- # for ctrl 
in /sys/class/nvme/nvme* 00:12:30.559 04:52:37 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:30.559 04:52:37 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:12:30.559 04:52:37 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:12:30.559 04:52:37 -- scripts/common.sh@15 -- # local i 00:12:30.559 04:52:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:12:30.559 04:52:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:30.559 04:52:37 -- scripts/common.sh@24 -- # return 0 00:12:30.559 04:52:37 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:30.559 04:52:37 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:30.559 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:30.559 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.560 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:30.560 04:52:37 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:30.560 04:52:37 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.560 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.560 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:30.560 04:52:37 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:30.561 04:52:37 
-- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.561 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:30.561 04:52:37 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.561 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:30.821 04:52:37 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
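[annotation] The point of this whole register harvest shows up at the tail of the trace: `nvme_scc.sh` calls `get_ctrls_with_feature scc`, and `ctrl_has_scc` keeps a controller only when bit 8 of ONCS (Copy command support in the NVMe spec) is set in the value captured a few entries below (`oncs=0x15d` on these QEMU controllers). A minimal stand-alone version of that bit test, using the value from this log:

```bash
# Stand-alone version of the ctrl_has_scc check seen at the end of this
# trace: ONCS bit 8 advertises the (simple) Copy command.
ctrl_has_scc() {
    local oncs=$1
    (( oncs & (1 << 8) ))             # 0x15d & 0x100 -> non-zero, so "yes"
}
ctrl_has_scc 0x15d && echo "supports simple copy"
```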
00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.821 04:52:37 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:30.821 04:52:37 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:12:30.821 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:30.822 04:52:37 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:30.822 04:52:37 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:12:30.822 04:52:37 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:12:30.822 04:52:37 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@18 -- # shift 00:12:30.822 04:52:37 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
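[annotation] The id-ns dump that follows pins down nvme3n1's geometry: `flbas=0x4` selects LBA format 4 (the one marked `(in use)` further down), whose `lbads:12` means 2^12-byte blocks, and `nsze=0x140000` counts blocks. A back-of-the-envelope size check from those logged values, not part of the test itself:

```bash
# Size check from the nvme3n1 id-ns values logged below: bits 3:0 of FLBAS
# pick the active LBA format, lbads is log2(block size), NSZE is the block count.
flbas=0x4; lbads=12; nsze=0x140000
echo "active format: lbaf$(( flbas & 0xf ))"   # -> lbaf4, matching "(in use)"
echo "$(( nsze * (1 << lbads) )) bytes"        # 1310720 * 4096 = 5368709120 (5 GiB)
```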
00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:12:30.822 04:52:37 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.822 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:12:30.822 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:12:30.822 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[nvmsetid]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:30.823 04:52:37 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # IFS=: 00:12:30.823 04:52:37 -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.823 04:52:37 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:12:30.823 04:52:37 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:30.823 04:52:37 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:30.823 04:52:37 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:12:30.823 04:52:37 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:30.823 04:52:37 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:30.823 04:52:37 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:30.823 04:52:37 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:12:30.823 04:52:37 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:30.823 04:52:37 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:12:30.823 04:52:37 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:30.823 04:52:37 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:12:30.823 04:52:37 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:12:30.823 04:52:37 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:30.823 04:52:37 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:12:30.823 04:52:37 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:12:30.823 04:52:37 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:12:30.823 04:52:37 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:30.823 04:52:37 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:30.823 04:52:37 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:30.823 04:52:37 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:30.823 04:52:37 -- nvme/functions.sh@197 -- # echo nvme1 00:12:30.823 04:52:37 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:30.823 04:52:37 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:12:30.823 04:52:37 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:12:30.823 04:52:37 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:12:30.823 04:52:37 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:30.823 04:52:37 -- 
nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:30.823 04:52:37 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:30.823 04:52:37 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:30.823 04:52:37 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:30.823 04:52:37 -- nvme/functions.sh@197 -- # echo nvme0 00:12:30.823 04:52:37 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:30.823 04:52:37 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:12:30.823 04:52:37 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:12:30.823 04:52:37 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:12:30.823 04:52:37 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:30.823 04:52:37 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:30.823 04:52:37 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:30.823 04:52:37 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:30.823 04:52:37 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:30.823 04:52:37 -- nvme/functions.sh@197 -- # echo nvme3 00:12:30.823 04:52:37 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:30.823 04:52:37 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:12:30.823 04:52:37 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:12:30.823 04:52:37 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:12:30.824 04:52:37 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:12:30.824 04:52:37 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:30.824 04:52:37 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:30.824 04:52:37 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:30.824 04:52:37 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:30.824 04:52:37 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:30.824 04:52:37 -- nvme/functions.sh@76 -- # echo 0x15d 00:12:30.824 04:52:37 -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:30.824 04:52:37 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:30.824 04:52:37 -- nvme/functions.sh@197 -- # echo nvme2 00:12:30.824 04:52:37 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:12:30.824 04:52:37 -- nvme/functions.sh@206 -- # echo nvme1 00:12:30.824 04:52:37 -- nvme/functions.sh@207 -- # return 0 00:12:30.824 04:52:37 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:30.824 04:52:37 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:12:30.824 04:52:37 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.759 lsblk: /dev/nvme0c0n1: not a block device 00:12:31.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.017 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.017 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.017 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.017 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.017 04:52:39 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:12:32.017 04:52:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:32.017 04:52:39 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:12:32.017 04:52:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.017 ************************************ 00:12:32.017 START TEST nvme_simple_copy 00:12:32.017 ************************************ 00:12:32.017 04:52:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:12:32.585 Initializing NVMe Controllers 00:12:32.585 Attaching to 0000:00:08.0 00:12:32.585 Controller supports SCC. Attached to 0000:00:08.0 00:12:32.585 Namespace ID: 1 size: 4GB 00:12:32.585 Initialization complete. 00:12:32.585 00:12:32.585 Controller QEMU NVMe Ctrl (12342 ) 00:12:32.585 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:32.585 Namespace Block Size:4096 00:12:32.585 Writing LBAs 0 to 63 with Random Data 00:12:32.585 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:32.585 LBAs matching Written Data: 64 00:12:32.585 00:12:32.585 real 0m0.308s 00:12:32.585 user 0m0.123s 00:12:32.585 sys 0m0.083s 00:12:32.585 ************************************ 00:12:32.585 END TEST nvme_simple_copy 00:12:32.585 ************************************ 00:12:32.585 04:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.585 04:52:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.585 ************************************ 00:12:32.585 END TEST nvme_scc 00:12:32.585 ************************************ 00:12:32.585 00:12:32.585 real 0m8.362s 00:12:32.585 user 0m1.454s 00:12:32.585 sys 0m1.798s 00:12:32.585 04:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.585 04:52:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.585 04:52:39 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:12:32.585 04:52:39 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:12:32.585 04:52:39 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:12:32.585 04:52:39 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]] 00:12:32.585 04:52:39 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:32.585 04:52:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:32.585 04:52:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:32.585 04:52:39 -- common/autotest_common.sh@10 -- # set +x 00:12:32.585 ************************************ 00:12:32.585 START TEST nvme_fdp 00:12:32.585 ************************************ 00:12:32.585 04:52:39 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh 00:12:32.585 * Looking for test storage... 
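The controller selection traced above keys off ONCS (Optional NVM Command Support), where bit 8 advertises the Copy command needed for Simple Copy. Every controller here reports oncs=0x15d, and 0x15d & (1 << 8) = 0x100 is non-zero, so all four qualify and nvme1 is simply the first one echoed back. A minimal sketch of that check in the same shell idiom; the helper name is simplified from the ctrl_has_scc/get_oncs pair in functions.sh, which reads ONCS out of its register cache rather than taking it as an argument:

  # SCC detection as traced above: ONCS bit 8 flags Copy command support.
  ctrl_supports_scc() {
      local oncs=$1              # e.g. 0x15d from 'nvme id-ctrl'
      (( oncs & 1 << 8 ))        # arithmetic result is non-zero when bit 8 is set
  }
  ctrl_supports_scc 0x15d && echo "Simple Copy supported"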
00:12:32.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:32.585 04:52:39 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:32.585 04:52:39 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:32.585 04:52:39 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:32.585 04:52:39 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:32.585 04:52:39 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.585 04:52:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.585 04:52:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.585 04:52:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.585 04:52:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.585 04:52:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.585 04:52:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.585 04:52:39 -- paths/export.sh@5 -- # export PATH 00:12:32.585 04:52:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.585 04:52:39 -- nvme/functions.sh@10 -- # ctrls=() 00:12:32.585 04:52:39 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:32.585 04:52:39 -- nvme/functions.sh@11 -- # nvmes=() 00:12:32.585 04:52:39 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:32.585 04:52:39 -- nvme/functions.sh@12 -- # bdfs=() 00:12:32.585 04:52:39 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:32.585 04:52:39 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:32.585 04:52:39 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:32.585 04:52:39 -- nvme/functions.sh@14 -- # nvme_name= 00:12:32.585 04:52:39 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:32.585 04:52:39 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:33.151 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:33.151 Waiting for block devices as requested 00:12:33.151 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:33.410 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:33.410 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:33.410 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.696 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:38.696 04:52:45 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:38.696 04:52:45 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:38.696 04:52:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:38.696 04:52:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:12:38.696 04:52:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:12:38.696 04:52:45 -- scripts/common.sh@15 -- # local i 00:12:38.696 04:52:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:12:38.696 04:52:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:38.696 04:52:45 -- scripts/common.sh@24 -- # return 0 00:12:38.696 04:52:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:38.696 04:52:45 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:38.696 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.696 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 
04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.696 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.696 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:38.696 04:52:45 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:38.697 04:52:45 -- nvme/functions.sh@21 
-- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
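The long run of IFS=: / read -r reg val / eval steps above is the register scan itself: functions.sh walks the "name : value" lines printed by nvme id-ctrl and caches each register in a global associative array, which later feature checks such as get_nvme_ctrl_feature read back. A condensed sketch of the pattern under those assumptions (whitespace trimming is simplified relative to the real script, and /dev/nvme0 stands in for whichever device is being scanned):

  # Cache 'nvme id-ctrl' registers in an associative array, as traced above.
  declare -gA nvme0=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}              # drop padding around the register name
      [[ -n $reg && -n $val ]] || continue  # skip blank and banner lines
      eval "nvme0[$reg]=\"${val# }\""       # e.g. nvme0[mdts]="7"
  done < <(nvme id-ctrl /dev/nvme0)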
00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.697 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.697 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.697 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 
-- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:38.698 
04:52:45 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.698 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.698 04:52:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:38.698 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:38.699 04:52:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:38.699 04:52:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:38.699 04:52:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:12:38.699 04:52:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:38.699 04:52:45 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:12:38.699 04:52:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:12:38.699 04:52:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:12:38.699 04:52:45 -- scripts/common.sh@15 -- # local i 00:12:38.699 04:52:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:12:38.699 04:52:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:38.699 04:52:45 -- scripts/common.sh@24 -- # return 0 00:12:38.699 04:52:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:38.699 04:52:45 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:38.699 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.699 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:38.699 04:52:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:38.699 04:52:45 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.699 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:38.699 04:52:45 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.699 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:38.700 04:52:45 
-- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:38.700 04:52:45 -- 
nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:38.700 04:52:45 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.700 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.700 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.701 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.701 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.701 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # 
nvme1[icdoff]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:38.702 04:52:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:38.702 04:52:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:38.702 04:52:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:38.702 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.702 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 
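Within the entry above, the trace moves from the controller to its namespaces: `local -n _ctrl_ns=nvme1_ns` binds a nameref to the controller's namespace map, and `for ns in "$ctrl/${ctrl##*/}n"*` globs /sys/class/nvme/nvme1/nvme1n1 through nvme1n3, running the same parse with `id-ns` for each device node. A sketch of that loop, reusing the hypothetical nvme_get_sketch helper from the note above:

ctrl=/sys/class/nvme/nvme1
declare -A nvme1_ns=()
declare -n _ctrl_ns=nvme1_ns            # nameref, as in the trace's local -n
for ns in "$ctrl/${ctrl##*/}n"*; do     # matches nvme1n1, nvme1n2, nvme1n3
    [[ -e $ns ]] || continue            # the trace tests -e before parsing
    ns_dev=${ns##*/}
    nvme_get_sketch "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns_dev##*n}]=$ns_dev     # index by namespace number: 1, 2, 3
done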
00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:38.702 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.702 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.702 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:38.703 04:52:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.703 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.703 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:38.703 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:38.704 04:52:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:38.704 04:52:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:12:38.704 04:52:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:12:38.704 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.704 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:12:38.704 
04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 
'nvme1n2[nacwu]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:12:38.704 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.704 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.704 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:38.705 04:52:45 
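The lbaf0-lbaf7 rows recorded for nvme1n1 above (and being repeated here for nvme1n2) list the namespace's LBA formats; flbas=0x4 selects lbaf4, `ms:0 lbads:12 rp:0 (in use)`, and since lbads is the base-2 log of the LBA data size, these QEMU namespaces use 4096-byte blocks with no metadata. A sketch of decoding that from the arrays built above (array names are from the trace; the sed extraction is illustrative):

fmt=$(( ${nvme1n1[flbas]} & 0xf ))               # low nibble = active LBA format index
lbaf=${nvme1n1[lbaf$fmt]}                        # -> 'ms:0 lbads:12 rp:0 (in use)'
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf")
echo "nvme1n1 block size: $(( 1 << lbads )) bytes"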
-- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:12:38.705 04:52:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:38.705 04:52:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:12:38.705 04:52:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:12:38.705 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@18 -- # shift 
00:12:38.705 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.705 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.705 
04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.705 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:12:38.706 04:52:45 
-- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.706 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:12:38.706 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.706 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:12:38.707 04:52:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:38.707 04:52:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:38.707 04:52:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:12:38.707 04:52:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:38.707 04:52:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:38.707 04:52:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:12:38.707 04:52:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:12:38.707 04:52:45 -- scripts/common.sh@15 -- # local i 00:12:38.707 04:52:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:12:38.707 04:52:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:38.707 04:52:45 -- scripts/common.sh@24 -- # return 0 00:12:38.707 04:52:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:38.707 04:52:45 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:38.707 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.707 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:38.707 04:52:45 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.707 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.707 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 
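
The IFS=: / read -r reg val / eval triplets repeated throughout this trace are nvme_get() in nvme/functions.sh folding each 'field : value' line of nvme id-ctrl output into a bash associative array. A minimal standalone sketch of that loop, assuming the id-ctrl text format visible above (the array name ctrl and the direct assignment are illustrative; the real helper evals into a dynamically named array such as nvme2):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # 'vid       ' -> 'vid', 'lbaf  0 ' -> 'lbaf0'
        [[ -n $val ]] || continue    # same guard as the [[ -n ... ]] checks above
        ctrl[$reg]=${val# }          # e.g. ctrl[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)

Fields the controller does not report read back empty and are skipped, which is why only populated identify registers land in the array.
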
00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:38.970 04:52:45 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.970 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.970 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:38.971 
04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:38.971 
04:52:45 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:38.971 04:52:45 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.971 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.971 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 
04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:38.972 04:52:45 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.972 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 
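
Note how composite values survive that single split: with IFS set to ':', read -r reg val breaks only at the first colon and leaves the rest of the line, embedded colons included, in val, which is how nvme2[ps0] above keeps its whole 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' string (the power-state continuation line then lands under the rwt key). A hypothetical one-liner mirroring the ps0 entry just parsed:

    IFS=: read -r reg val <<< 'ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    echo "${reg//[[:space:]]/}"   # -> ps0
    echo "${val# }"               # -> mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
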
00:12:38.972 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:38.973 04:52:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:38.973 04:52:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:38.973 04:52:45 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:38.973 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.973 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:38.973 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.973 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.973 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:38.974 04:52:45 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:38.974 04:52:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:38.974 04:52:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:38.974 04:52:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:12:38.974 04:52:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:38.974 04:52:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:38.974 04:52:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:12:38.974 04:52:45 -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:12:38.974 04:52:45 -- scripts/common.sh@15 -- # local i 00:12:38.974 04:52:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:12:38.974 04:52:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:38.974 04:52:45 -- scripts/common.sh@24 -- # return 0 00:12:38.974 04:52:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:38.974 04:52:45 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:38.974 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.974 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.974 04:52:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:38.974 04:52:45 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:38.974 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- 
nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:38.975 
04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:38.975 04:52:45 -- 
nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.975 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.975 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:38.975 04:52:45 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- 
nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 
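The long run of eval'd assignments above and below is nvme_get at work: functions.sh@16-23 pipes nvme-cli's id-ctrl output through IFS=: read -r reg val, splitting each "register : value" line at the colon and storing the pair into a global associative array (nvme3 here). A minimal standalone sketch of the same pattern follows; it uses a fixed array name instead of the helper's dynamic one, and the device path is the one probed in this run:

    #!/usr/bin/env bash
    # Parse "reg : value" lines from nvme-cli into a bash associative array,
    # mirroring the nvme_get loop traced above (a simplified sketch).
    declare -gA ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # register names come padded with spaces
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }               # keep the raw value, minus one leading space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "mdts=${ctrl[mdts]} sqes=${ctrl[sqes]}"   # e.g. mdts=7 sqes=0x66 in this run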
00:12:38.976 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.976 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:38.976 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # 
nvme3[nwpc]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:38.977 04:52:45 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.977 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.977 04:52:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:38.977 04:52:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:38.977 04:52:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:12:38.978 04:52:45 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:12:38.978 04:52:45 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@18 -- # shift 00:12:38.978 04:52:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 
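The namespace scan starting here repeats the same parsing for id-ns. The nsze value just captured (0x140000 LBAs) combines with the in-use LBA format reported further down (lbaf4, lbads:12, i.e. 4096-byte blocks) to give the namespace size; a quick check of the arithmetic:

    # 0x140000 LBAs x 2^12 bytes per LBA = 5,368,709,120 bytes, i.e. 5 GiB.
    echo $(( 0x140000 * (1 << 12) ))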
00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:12:38.978 04:52:45 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:12:38.978 04:52:45 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:12:38.978 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:12:38.978 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:12:38.978 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.978 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.978 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:12:38.978 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 
-- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:38.979 04:52:46 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # IFS=: 00:12:38.979 04:52:46 -- nvme/functions.sh@21 -- # read -r reg val 00:12:38.979 04:52:46 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:12:38.979 04:52:46 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:38.979 04:52:46 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:38.979 04:52:46 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:12:38.979 04:52:46 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:38.979 04:52:46 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:38.979 04:52:46 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:38.979 04:52:46 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:38.979 04:52:46 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:38.979 04:52:46 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:38.979 04:52:46 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:38.979 04:52:46 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:38.979 04:52:46 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:38.979 04:52:46 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:38.979 04:52:46 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:38.979 04:52:46 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:38.979 04:52:46 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:38.979 04:52:46 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:38.979 04:52:46 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:38.979 04:52:46 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:38.979 04:52:46 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:38.979 04:52:46 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:38.979 04:52:46 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:38.979 04:52:46 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:38.979 04:52:46 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:38.979 04:52:46 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@76 -- # echo 0x88010 
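The selection loop running here tests bit 19 of each controller's CTRATT, which is how ctrl_has_fdp (functions.sh@178) decides whether Flexible Data Placement is supported: of the values read back, only nvme0's 0x88010 has the bit set, while the 0x8000 reported by the other controllers does not. The check in isolation, as a sketch:

    # ctrl_has_fdp reduces to one arithmetic test: CTRATT bit 19 = FDP support.
    has_fdp() { local ctratt=$1; (( ctratt & 1 << 19 )); }
    has_fdp 0x88010 && echo "FDP supported"       # nvme0 above
    has_fdp 0x8000  || echo "FDP not supported"   # the other three controllers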
00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:38.979 04:52:46 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:38.979 04:52:46 -- nvme/functions.sh@197 -- # echo nvme0 00:12:38.979 04:52:46 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:38.979 04:52:46 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:38.979 04:52:46 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:38.979 04:52:46 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:38.979 04:52:46 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:38.979 04:52:46 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:38.979 04:52:46 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:38.979 04:52:46 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:38.979 04:52:46 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:38.979 04:52:46 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:38.979 04:52:46 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:38.979 04:52:46 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:38.979 04:52:46 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:38.979 04:52:46 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:38.979 04:52:46 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:38.979 04:52:46 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:38.979 04:52:46 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:38.979 04:52:46 -- nvme/functions.sh@204 -- # trap - ERR 00:12:38.979 04:52:46 -- nvme/functions.sh@204 -- # print_backtrace 00:12:38.979 04:52:46 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:12:38.979 04:52:46 -- common/autotest_common.sh@1132 -- # return 0 00:12:38.979 04:52:46 -- nvme/functions.sh@204 -- # trap - ERR 00:12:38.979 04:52:46 -- nvme/functions.sh@204 -- # print_backtrace 00:12:38.979 04:52:46 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:12:38.979 04:52:46 -- common/autotest_common.sh@1132 -- # return 0 00:12:38.979 04:52:46 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:38.979 04:52:46 -- nvme/functions.sh@206 -- # echo nvme0 00:12:38.979 04:52:46 -- nvme/functions.sh@207 -- # return 0 00:12:38.979 04:52:46 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:12:38.979 04:52:46 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:12:38.979 04:52:46 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:39.917 lsblk: /dev/nvme0c0n1: not a block device 00:12:40.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.176 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.176 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.176 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.435 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.435 04:52:47 -- nvme/nvme_fdp.sh@17 -- # run_test 
nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:12:40.435 04:52:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:40.435 04:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:40.435 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:40.435 ************************************ 00:12:40.435 START TEST nvme_flexible_data_placement 00:12:40.435 ************************************ 00:12:40.435 04:52:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:12:40.694 Initializing NVMe Controllers 00:12:40.694 Attaching to 0000:00:09.0 00:12:40.694 Controller supports FDP Attached to 0000:00:09.0 00:12:40.694 Namespace ID: 1 Endurance Group ID: 1 00:12:40.694 Initialization complete. 00:12:40.694 00:12:40.694 ================================== 00:12:40.694 == FDP tests for Namespace: #01 == 00:12:40.694 ================================== 00:12:40.694 00:12:40.694 Get Feature: FDP: 00:12:40.694 ================= 00:12:40.694 Enabled: Yes 00:12:40.694 FDP configuration Index: 0 00:12:40.694 00:12:40.694 FDP configurations log page 00:12:40.694 =========================== 00:12:40.694 Number of FDP configurations: 1 00:12:40.694 Version: 0 00:12:40.694 Size: 112 00:12:40.694 FDP Configuration Descriptor: 0 00:12:40.694 Descriptor Size: 96 00:12:40.694 Reclaim Group Identifier format: 2 00:12:40.694 FDP Volatile Write Cache: Not Present 00:12:40.694 FDP Configuration: Valid 00:12:40.694 Vendor Specific Size: 0 00:12:40.694 Number of Reclaim Groups: 2 00:12:40.694 Number of Reclaim Unit Handles: 8 00:12:40.694 Max Placement Identifiers: 128 00:12:40.694 Number of Namespaces Supported: 256 00:12:40.694 Reclaim Unit Nominal Size: 6000000 bytes 00:12:40.694 Estimated Reclaim Unit Time Limit: Not Reported 00:12:40.695 RUH Desc #000: RUH Type: Initially Isolated 00:12:40.695 RUH Desc #001: RUH Type: Initially Isolated 00:12:40.695 RUH Desc #002: RUH Type: Initially Isolated 00:12:40.695 RUH Desc #003: RUH Type: Initially Isolated 00:12:40.695 RUH Desc #004: RUH Type: Initially Isolated 00:12:40.695 RUH Desc #005: RUH Type: Initially Isolated 00:12:40.695 RUH Desc #006: RUH Type: Initially Isolated 00:12:40.695 RUH Desc #007: RUH Type: Initially Isolated 00:12:40.695 00:12:40.695 FDP reclaim unit handle usage log page 00:12:40.695 ====================================== 00:12:40.695 Number of Reclaim Unit Handles: 8 00:12:40.695 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:40.695 RUH Usage Desc #001: RUH Attributes: Unused 00:12:40.695 RUH Usage Desc #002: RUH Attributes: Unused 00:12:40.695 RUH Usage Desc #003: RUH Attributes: Unused 00:12:40.695 RUH Usage Desc #004: RUH Attributes: Unused 00:12:40.695 RUH Usage Desc #005: RUH Attributes: Unused 00:12:40.695 RUH Usage Desc #006: RUH Attributes: Unused 00:12:40.695 RUH Usage Desc #007: RUH Attributes: Unused 00:12:40.695 00:12:40.695 FDP statistics log page 00:12:40.695 ======================= 00:12:40.695 Host bytes with metadata written: 794984448 00:12:40.695 Media bytes with metadata written: 795148288 00:12:40.695 Media bytes erased: 0 00:12:40.695 00:12:40.695 FDP Reclaim unit handle status 00:12:40.695 ============================== 00:12:40.695 Number of RUHS descriptors: 2 00:12:40.695 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000009d8 00:12:40.695 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW:
0x0000000000006000 00:12:40.695 00:12:40.695 FDP write on placement id: 0 success 00:12:40.695 00:12:40.695 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:40.695 00:12:40.695 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:40.695 00:12:40.695 Get Feature: FDP Events for Placement handle: #0 00:12:40.695 ======================== 00:12:40.695 Number of FDP Events: 6 00:12:40.695 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:40.695 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:40.695 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:40.695 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:40.695 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:40.695 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:40.695 00:12:40.695 FDP events log page 00:12:40.695 =================== 00:12:40.695 Number of FDP events: 1 00:12:40.695 FDP Event #0: 00:12:40.695 Event Type: RU Not Written to Capacity 00:12:40.695 Placement Identifier: Valid 00:12:40.695 NSID: Valid 00:12:40.695 Location: Valid 00:12:40.695 Placement Identifier: 0 00:12:40.695 Event Timestamp: c 00:12:40.695 Namespace Identifier: 1 00:12:40.695 Reclaim Group Identifier: 0 00:12:40.695 Reclaim Unit Handle Identifier: 0 00:12:40.695 00:12:40.695 FDP test passed 00:12:40.695 00:12:40.695 real 0m0.271s 00:12:40.695 user 0m0.090s 00:12:40.695 sys 0m0.080s 00:12:40.695 ************************************ 00:12:40.695 END TEST nvme_flexible_data_placement 00:12:40.695 ************************************ 00:12:40.695 04:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.695 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:40.695 ************************************ 00:12:40.695 END TEST nvme_fdp 00:12:40.695 ************************************ 00:12:40.695 00:12:40.695 real 0m8.215s 00:12:40.695 user 0m1.335s 00:12:40.695 sys 0m1.816s 00:12:40.695 04:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.695 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:40.695 04:52:47 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:12:40.695 04:52:47 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:40.695 04:52:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:40.695 04:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:40.695 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:40.695 ************************************ 00:12:40.695 START TEST nvme_rpc 00:12:40.695 ************************************ 00:12:40.695 04:52:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:40.954 * Looking for test storage... 
00:12:40.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:40.954 04:52:47 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:40.954 04:52:47 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:40.954 04:52:47 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:40.954 04:52:47 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:40.954 04:52:47 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:40.954 04:52:47 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:40.954 04:52:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:40.954 04:52:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:40.954 04:52:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:40.954 04:52:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:40.954 04:52:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:40.954 04:52:47 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:40.954 04:52:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:40.954 04:52:47 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:12:40.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.954 04:52:47 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:12:40.954 04:52:47 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67632 00:12:40.954 04:52:47 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:40.954 04:52:47 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:40.954 04:52:47 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67632 00:12:40.954 04:52:47 -- common/autotest_common.sh@819 -- # '[' -z 67632 ']' 00:12:40.954 04:52:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.954 04:52:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:40.954 04:52:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.954 04:52:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:40.954 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:12:40.954 [2024-05-12 04:52:48.053968] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
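The bdf the rpc test just latched onto comes from get_first_nvme_bdf, which enumerates the attached controllers through gen_nvme.sh and takes the head of the list; the jq filter is the one visible in the trace. Reproduced standalone as a sketch:

    # Enumerate NVMe PCI addresses from the generated config and keep the first.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers" >&2; exit 1; }
    echo "${bdfs[0]}"                     # 0000:00:06.0 out of the four found here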
00:12:40.954 [2024-05-12 04:52:48.054312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67632 ] 00:12:41.213 [2024-05-12 04:52:48.226713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:41.472 [2024-05-12 04:52:48.439546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:41.472 [2024-05-12 04:52:48.440121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.472 [2024-05-12 04:52:48.440148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.846 04:52:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:42.846 04:52:49 -- common/autotest_common.sh@852 -- # return 0 00:12:42.846 04:52:49 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:12:42.846 Nvme0n1 00:12:43.103 04:52:49 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:43.103 04:52:49 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:43.103 request: 00:12:43.103 { 00:12:43.103 "filename": "non_existing_file", 00:12:43.103 "bdev_name": "Nvme0n1", 00:12:43.103 "method": "bdev_nvme_apply_firmware", 00:12:43.103 "req_id": 1 00:12:43.103 } 00:12:43.103 Got JSON-RPC error response 00:12:43.103 response: 00:12:43.103 { 00:12:43.103 "code": -32603, 00:12:43.103 "message": "open file failed." 00:12:43.103 } 00:12:43.103 04:52:50 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:43.103 04:52:50 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:43.103 04:52:50 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:43.362 04:52:50 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:43.362 04:52:50 -- nvme/nvme_rpc.sh@40 -- # killprocess 67632 00:12:43.362 04:52:50 -- common/autotest_common.sh@926 -- # '[' -z 67632 ']' 00:12:43.362 04:52:50 -- common/autotest_common.sh@930 -- # kill -0 67632 00:12:43.362 04:52:50 -- common/autotest_common.sh@931 -- # uname 00:12:43.362 04:52:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:43.362 04:52:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67632 00:12:43.620 killing process with pid 67632 00:12:43.620 04:52:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:43.620 04:52:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:43.620 04:52:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67632' 00:12:43.620 04:52:50 -- common/autotest_common.sh@945 -- # kill 67632 00:12:43.620 04:52:50 -- common/autotest_common.sh@950 -- # wait 67632 00:12:45.540 ************************************ 00:12:45.540 END TEST nvme_rpc 00:12:45.540 ************************************ 00:12:45.540 00:12:45.540 real 0m4.581s 00:12:45.540 user 0m8.935s 00:12:45.540 sys 0m0.570s 00:12:45.540 04:52:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.540 04:52:52 -- common/autotest_common.sh@10 -- # set +x 00:12:45.540 04:52:52 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:45.540 04:52:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:45.540 04:52:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 
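The nvme_rpc test that just ended is deliberately negative: it points bdev_nvme_apply_firmware at a file that does not exist and requires the JSON-RPC call to fail (code -32603, "open file failed.") rather than succeed. The pattern, reduced to a sketch of what the rv check above is doing:

    # A failing rpc.py call must leave a non-zero status in rv; unexpected
    # success would fail the test.
    rv=0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=$?
    (( rv != 0 )) && echo "expected failure observed (rv=$rv)"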
00:12:45.541 04:52:52 -- common/autotest_common.sh@10 -- # set +x 00:12:45.541 ************************************ 00:12:45.541 START TEST nvme_rpc_timeouts 00:12:45.541 ************************************ 00:12:45.541 04:52:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:45.541 * Looking for test storage... 00:12:45.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:45.541 04:52:52 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:45.541 04:52:52 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67716 00:12:45.541 04:52:52 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67716 00:12:45.541 04:52:52 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67743 00:12:45.541 04:52:52 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:45.541 04:52:52 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:45.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.541 04:52:52 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67743 00:12:45.541 04:52:52 -- common/autotest_common.sh@819 -- # '[' -z 67743 ']' 00:12:45.541 04:52:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.541 04:52:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:45.541 04:52:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.541 04:52:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:45.541 04:52:52 -- common/autotest_common.sh@10 -- # set +x 00:12:45.541 [2024-05-12 04:52:52.621100] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:45.541 [2024-05-12 04:52:52.621495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67743 ] 00:12:45.822 [2024-05-12 04:52:52.791828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:46.082 [2024-05-12 04:52:52.967330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:46.082 [2024-05-12 04:52:52.967992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.082 [2024-05-12 04:52:52.968015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.456 Checking default timeout settings: 00:12:47.456 04:52:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:47.456 04:52:54 -- common/autotest_common.sh@852 -- # return 0 00:12:47.456 04:52:54 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:47.456 04:52:54 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:47.714 Making settings changes with rpc: 00:12:47.714 04:52:54 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:47.714 04:52:54 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:47.972 Check default vs. 
modified settings: 00:12:47.972 04:52:54 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:47.972 04:52:54 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67716 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67716 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:48.231 Setting action_on_timeout is changed as expected. 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67716 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67716 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:48.231 Setting timeout_us is changed as expected. 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67716 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67716 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:48.231 Setting timeout_admin_us is changed as expected. 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
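[editor's note] The three comparisons traced above all follow one pattern; a reconstruction from the trace (the /tmp file names carry this run's PID suffix 67716, and rpc.py again stands for the full scripts/rpc.py path):

    # Default-vs-modified settings comparison as driven by nvme_rpc_timeouts.sh:
    rpc.py save_config > /tmp/settings_default_67716
    rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    rpc.py save_config > /tmp/settings_modified_67716
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67716 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67716 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != $after ]] && echo "Setting $setting is changed as expected."
    done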
00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67716 /tmp/settings_modified_67716 00:12:48.231 04:52:55 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67743 00:12:48.231 04:52:55 -- common/autotest_common.sh@926 -- # '[' -z 67743 ']' 00:12:48.231 04:52:55 -- common/autotest_common.sh@930 -- # kill -0 67743 00:12:48.231 04:52:55 -- common/autotest_common.sh@931 -- # uname 00:12:48.231 04:52:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:48.231 04:52:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67743 00:12:48.231 killing process with pid 67743 00:12:48.231 04:52:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:48.231 04:52:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:48.231 04:52:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67743' 00:12:48.231 04:52:55 -- common/autotest_common.sh@945 -- # kill 67743 00:12:48.231 04:52:55 -- common/autotest_common.sh@950 -- # wait 67743 00:12:50.131 RPC TIMEOUT SETTING TEST PASSED. 00:12:50.131 04:52:57 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:50.131 ************************************ 00:12:50.131 END TEST nvme_rpc_timeouts 00:12:50.131 ************************************ 00:12:50.131 00:12:50.131 real 0m4.775s 00:12:50.131 user 0m9.447s 00:12:50.131 sys 0m0.587s 00:12:50.131 04:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.131 04:52:57 -- common/autotest_common.sh@10 -- # set +x 00:12:50.131 04:52:57 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:12:50.131 04:52:57 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:12:50.131 04:52:57 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:50.131 04:52:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:50.131 04:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:50.131 04:52:57 -- common/autotest_common.sh@10 -- # set +x 00:12:50.131 ************************************ 00:12:50.131 START TEST nvme_xnvme 00:12:50.131 ************************************ 00:12:50.131 04:52:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:50.389 * Looking for test storage... 
00:12:50.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:50.389 04:52:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.389 04:52:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.389 04:52:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.389 04:52:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.389 04:52:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 04:52:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 04:52:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 04:52:57 -- paths/export.sh@5 -- # export PATH 00:12:50.389 04:52:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:50.389 04:52:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:50.389 04:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:50.389 04:52:57 -- common/autotest_common.sh@10 -- # set +x 00:12:50.389 ************************************ 00:12:50.389 START TEST xnvme_to_malloc_dd_copy 00:12:50.389 ************************************ 00:12:50.389 04:52:57 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:50.389 04:52:57 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:50.389 04:52:57 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:50.389 04:52:57 -- dd/common.sh@191 -- # return 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@18 -- # local io 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:50.389 
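[editor's note] The init_null_blk call traced above backs the whole copy suite; the essential part is one modprobe (the gb=1 parameter is visible in the trace, the rest of init_null_blk's bookkeeping is omitted here):

    # Backing device for the xnvme bdev in this suite:
    modprobe null_blk gb=1      # creates /dev/nullb0, a 1 GiB RAM-backed block device
    # matching teardown, traced at the end of the suite:
    modprobe -r null_blk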
04:52:57 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:50.389 04:52:57 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:50.390 04:52:57 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:50.390 04:52:57 -- dd/common.sh@31 -- # xtrace_disable 00:12:50.390 04:52:57 -- common/autotest_common.sh@10 -- # set +x 00:12:50.390 04:52:57 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:50.390 { 00:12:50.390 "subsystems": [ 00:12:50.390 { 00:12:50.390 "subsystem": "bdev", 00:12:50.390 "config": [ 00:12:50.390 { 00:12:50.390 "params": { 00:12:50.390 "block_size": 512, 00:12:50.390 "num_blocks": 2097152, 00:12:50.390 "name": "malloc0" 00:12:50.390 }, 00:12:50.390 "method": "bdev_malloc_create" 00:12:50.390 }, 00:12:50.390 { 00:12:50.390 "params": { 00:12:50.390 "io_mechanism": "libaio", 00:12:50.390 "filename": "/dev/nullb0", 00:12:50.390 "name": "null0" 00:12:50.390 }, 00:12:50.390 "method": "bdev_xnvme_create" 00:12:50.390 }, 00:12:50.390 { 00:12:50.390 "method": "bdev_wait_for_examine" 00:12:50.390 } 00:12:50.390 ] 00:12:50.390 } 00:12:50.390 ] 00:12:50.390 } 00:12:50.390 [2024-05-12 04:52:57.464925] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
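[editor's note] The malloc bdev in the JSON just printed is 2097152 blocks of 512 B, i.e. exactly 1 GiB, which is why every pass below reports 1024/1024 [MB]:

    # Size sanity check for the config above:
    echo $(( 2097152 * 512 / 1024 / 1024 )) MB    # -> 1024 MB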
00:12:50.390 [2024-05-12 04:52:57.465273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67884 ] 00:12:50.648 [2024-05-12 04:52:57.638593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.907 [2024-05-12 04:52:57.862805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.094  Copying: 175/1024 [MB] (175 MBps) Copying: 351/1024 [MB] (175 MBps) Copying: 527/1024 [MB] (176 MBps) Copying: 704/1024 [MB] (176 MBps) Copying: 881/1024 [MB] (177 MBps) Copying: 1024/1024 [MB] (average 176 MBps) 00:13:01.094 00:13:01.094 04:53:08 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:01.094 04:53:08 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:01.094 04:53:08 -- dd/common.sh@31 -- # xtrace_disable 00:13:01.094 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:13:01.094 { 00:13:01.094 "subsystems": [ 00:13:01.094 { 00:13:01.094 "subsystem": "bdev", 00:13:01.094 "config": [ 00:13:01.094 { 00:13:01.094 "params": { 00:13:01.094 "block_size": 512, 00:13:01.094 "num_blocks": 2097152, 00:13:01.094 "name": "malloc0" 00:13:01.094 }, 00:13:01.094 "method": "bdev_malloc_create" 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "params": { 00:13:01.094 "io_mechanism": "libaio", 00:13:01.094 "filename": "/dev/nullb0", 00:13:01.094 "name": "null0" 00:13:01.094 }, 00:13:01.094 "method": "bdev_xnvme_create" 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "method": "bdev_wait_for_examine" 00:13:01.094 } 00:13:01.094 ] 00:13:01.094 } 00:13:01.094 ] 00:13:01.094 } 00:13:01.094 [2024-05-12 04:53:08.116969] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
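[editor's note] For orientation, the four copy passes in this test come from one loop in xnvme.sh over the io mechanisms; a sketch, assuming gen_conf emits the subsystems JSON shown above (the trace feeds it to spdk_dd through /dev/fd/62, rendered here as process substitution, and the long binary paths are shortened):

    # Four passes total: {libaio, io_uring} x {forward, reverse}
    for io in libaio io_uring; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # malloc -> xnvme bdev
        spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # xnvme bdev -> malloc
    done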
00:13:01.094 [2024-05-12 04:53:08.117129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68000 ] 00:13:01.352 [2024-05-12 04:53:08.287685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.352 [2024-05-12 04:53:08.455976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.489  Copying: 183/1024 [MB] (183 MBps) Copying: 368/1024 [MB] (185 MBps) Copying: 555/1024 [MB] (186 MBps) Copying: 740/1024 [MB] (185 MBps) Copying: 926/1024 [MB] (185 MBps) Copying: 1024/1024 [MB] (average 185 MBps) 00:13:11.489 00:13:11.490 04:53:18 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:11.490 04:53:18 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:11.490 04:53:18 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:11.490 04:53:18 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:11.490 04:53:18 -- dd/common.sh@31 -- # xtrace_disable 00:13:11.490 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:13:11.490 { 00:13:11.490 "subsystems": [ 00:13:11.490 { 00:13:11.490 "subsystem": "bdev", 00:13:11.490 "config": [ 00:13:11.490 { 00:13:11.490 "params": { 00:13:11.490 "block_size": 512, 00:13:11.490 "num_blocks": 2097152, 00:13:11.490 "name": "malloc0" 00:13:11.490 }, 00:13:11.490 "method": "bdev_malloc_create" 00:13:11.490 }, 00:13:11.490 { 00:13:11.490 "params": { 00:13:11.490 "io_mechanism": "io_uring", 00:13:11.490 "filename": "/dev/nullb0", 00:13:11.490 "name": "null0" 00:13:11.490 }, 00:13:11.490 "method": "bdev_xnvme_create" 00:13:11.490 }, 00:13:11.490 { 00:13:11.490 "method": "bdev_wait_for_examine" 00:13:11.490 } 00:13:11.490 ] 00:13:11.490 } 00:13:11.490 ] 00:13:11.490 } 00:13:11.490 [2024-05-12 04:53:18.430453] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:11.490 [2024-05-12 04:53:18.430777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68119 ] 00:13:11.490 [2024-05-12 04:53:18.600827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.749 [2024-05-12 04:53:18.776059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.666  Copying: 193/1024 [MB] (193 MBps) Copying: 389/1024 [MB] (195 MBps) Copying: 578/1024 [MB] (189 MBps) Copying: 766/1024 [MB] (188 MBps) Copying: 954/1024 [MB] (187 MBps) Copying: 1024/1024 [MB] (average 190 MBps) 00:13:21.666 00:13:21.666 04:53:28 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:21.666 04:53:28 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:21.666 04:53:28 -- dd/common.sh@31 -- # xtrace_disable 00:13:21.666 04:53:28 -- common/autotest_common.sh@10 -- # set +x 00:13:21.666 { 00:13:21.666 "subsystems": [ 00:13:21.666 { 00:13:21.666 "subsystem": "bdev", 00:13:21.666 "config": [ 00:13:21.666 { 00:13:21.666 "params": { 00:13:21.666 "block_size": 512, 00:13:21.666 "num_blocks": 2097152, 00:13:21.666 "name": "malloc0" 00:13:21.666 }, 00:13:21.666 "method": "bdev_malloc_create" 00:13:21.666 }, 00:13:21.666 { 00:13:21.666 "params": { 00:13:21.666 "io_mechanism": "io_uring", 00:13:21.666 "filename": "/dev/nullb0", 00:13:21.666 "name": "null0" 00:13:21.666 }, 00:13:21.666 "method": "bdev_xnvme_create" 00:13:21.666 }, 00:13:21.666 { 00:13:21.666 "method": "bdev_wait_for_examine" 00:13:21.666 } 00:13:21.666 ] 00:13:21.666 } 00:13:21.666 ] 00:13:21.666 } 00:13:21.666 [2024-05-12 04:53:28.663677] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:21.666 [2024-05-12 04:53:28.664573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68235 ] 00:13:21.925 [2024-05-12 04:53:28.844080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.925 [2024-05-12 04:53:29.024418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.754  Copying: 194/1024 [MB] (194 MBps) Copying: 388/1024 [MB] (193 MBps) Copying: 582/1024 [MB] (194 MBps) Copying: 776/1024 [MB] (194 MBps) Copying: 972/1024 [MB] (195 MBps) Copying: 1024/1024 [MB] (average 194 MBps) 00:13:31.754 00:13:31.754 04:53:38 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:31.754 04:53:38 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:31.754 ************************************ 00:13:31.754 END TEST xnvme_to_malloc_dd_copy 00:13:31.755 ************************************ 00:13:31.755 00:13:31.755 real 0m41.469s 00:13:31.755 user 0m36.030s 00:13:31.755 sys 0m4.847s 00:13:31.755 04:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.755 04:53:38 -- common/autotest_common.sh@10 -- # set +x 00:13:31.755 04:53:38 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:31.755 04:53:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:31.755 04:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.755 04:53:38 -- common/autotest_common.sh@10 -- # set +x 00:13:31.755 ************************************ 00:13:31.755 START TEST xnvme_bdevperf 00:13:31.755 ************************************ 00:13:31.755 04:53:38 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:13:31.755 04:53:38 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:31.755 04:53:38 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:31.755 04:53:38 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:32.014 04:53:38 -- dd/common.sh@191 -- # return 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@60 -- # local io 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:32.014 04:53:38 -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:32.014 04:53:38 -- dd/common.sh@31 -- # xtrace_disable 00:13:32.014 04:53:38 -- common/autotest_common.sh@10 -- # set +x 00:13:32.014 { 00:13:32.014 "subsystems": [ 00:13:32.014 { 00:13:32.014 "subsystem": "bdev", 00:13:32.014 "config": [ 00:13:32.014 { 00:13:32.014 "params": { 00:13:32.014 "io_mechanism": "libaio", 00:13:32.014 "filename": "/dev/nullb0", 
00:13:32.014 "name": "null0" 00:13:32.014 }, 00:13:32.014 "method": "bdev_xnvme_create" 00:13:32.014 }, 00:13:32.014 { 00:13:32.014 "method": "bdev_wait_for_examine" 00:13:32.014 } 00:13:32.014 ] 00:13:32.014 } 00:13:32.014 ] 00:13:32.014 } 00:13:32.014 [2024-05-12 04:53:38.994165] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:32.014 [2024-05-12 04:53:38.994355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68372 ] 00:13:32.273 [2024-05-12 04:53:39.161912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.273 [2024-05-12 04:53:39.342875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.841 Running I/O for 5 seconds... 00:13:38.112 00:13:38.112 Latency(us) 00:13:38.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.112 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:38.112 null0 : 5.00 129427.97 505.58 0.00 0.00 491.49 150.81 1243.69 00:13:38.112 =================================================================================================================== 00:13:38.112 Total : 129427.97 505.58 0.00 0.00 491.49 150.81 1243.69 00:13:38.679 04:53:45 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:38.679 04:53:45 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:38.679 04:53:45 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:38.679 04:53:45 -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:38.679 04:53:45 -- dd/common.sh@31 -- # xtrace_disable 00:13:38.679 04:53:45 -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 { 00:13:38.679 "subsystems": [ 00:13:38.679 { 00:13:38.679 "subsystem": "bdev", 00:13:38.679 "config": [ 00:13:38.679 { 00:13:38.679 "params": { 00:13:38.679 "io_mechanism": "io_uring", 00:13:38.679 "filename": "/dev/nullb0", 00:13:38.679 "name": "null0" 00:13:38.679 }, 00:13:38.679 "method": "bdev_xnvme_create" 00:13:38.679 }, 00:13:38.679 { 00:13:38.679 "method": "bdev_wait_for_examine" 00:13:38.679 } 00:13:38.679 ] 00:13:38.679 } 00:13:38.679 ] 00:13:38.679 } 00:13:38.679 [2024-05-12 04:53:45.797693] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:38.679 [2024-05-12 04:53:45.797856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68452 ] 00:13:38.937 [2024-05-12 04:53:45.968354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.195 [2024-05-12 04:53:46.192984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.454 Running I/O for 5 seconds... 
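[editor's note] The libaio bdevperf result above is self-consistent under Little's law: at queue depth 64 (the -q 64 flag) and a mean latency of 491.49 us, the expected rate is about 130 K IOPS, close to the 129,428 reported, and 129,428 x 4 KiB reproduces the 505.58 MiB/s column:

    # Sanity check of the libaio numbers (QD=64, 4 KiB reads, 5 s run):
    awk 'BEGIN { printf "%.0f IOPS expected, %.2f MiB/s observed\n", 64/491.49e-6, 129427.97*4096/1048576 }'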
00:13:44.725 00:13:44.725 Latency(us) 00:13:44.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.725 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:44.725 null0 : 5.00 171141.88 668.52 0.00 0.00 371.08 217.83 860.16 00:13:44.725 =================================================================================================================== 00:13:44.725 Total : 171141.88 668.52 0.00 0.00 371.08 217.83 860.16 00:13:45.659 04:53:52 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:45.659 04:53:52 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:45.659 00:13:45.659 real 0m13.630s 00:13:45.659 user 0m10.669s 00:13:45.659 sys 0m2.744s 00:13:45.659 04:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.659 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.659 ************************************ 00:13:45.659 END TEST xnvme_bdevperf 00:13:45.659 ************************************ 00:13:45.659 ************************************ 00:13:45.659 END TEST nvme_xnvme 00:13:45.659 ************************************ 00:13:45.659 00:13:45.659 real 0m55.295s 00:13:45.659 user 0m46.768s 00:13:45.659 sys 0m7.701s 00:13:45.659 04:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.660 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.660 04:53:52 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:45.660 04:53:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:45.660 04:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.660 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.660 ************************************ 00:13:45.660 START TEST blockdev_xnvme 00:13:45.660 ************************************ 00:13:45.660 04:53:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:45.660 * Looking for test storage... 
00:13:45.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:45.660 04:53:52 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:45.660 04:53:52 -- bdev/nbd_common.sh@6 -- # set -e 00:13:45.660 04:53:52 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:45.660 04:53:52 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:45.660 04:53:52 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:45.660 04:53:52 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:45.660 04:53:52 -- bdev/blockdev.sh@18 -- # : 00:13:45.660 04:53:52 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:13:45.660 04:53:52 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:13:45.660 04:53:52 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:13:45.660 04:53:52 -- bdev/blockdev.sh@672 -- # uname -s 00:13:45.660 04:53:52 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:13:45.660 04:53:52 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:13:45.660 04:53:52 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:13:45.660 04:53:52 -- bdev/blockdev.sh@681 -- # crypto_device= 00:13:45.660 04:53:52 -- bdev/blockdev.sh@682 -- # dek= 00:13:45.660 04:53:52 -- bdev/blockdev.sh@683 -- # env_ctx= 00:13:45.660 04:53:52 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:13:45.660 04:53:52 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:13:45.660 04:53:52 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:13:45.660 04:53:52 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:13:45.660 04:53:52 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:13:45.660 04:53:52 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=68591 00:13:45.660 04:53:52 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:45.660 04:53:52 -- bdev/blockdev.sh@47 -- # waitforlisten 68591 00:13:45.660 04:53:52 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:45.660 04:53:52 -- common/autotest_common.sh@819 -- # '[' -z 68591 ']' 00:13:45.660 04:53:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.660 04:53:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.660 04:53:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.660 04:53:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.660 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:13:45.918 [2024-05-12 04:53:52.788911] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
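[editor's note] The target start-up here follows the pattern every suite in this log uses: launch spdk_tgt, register a cleanup trap, then block until the RPC socket answers. A sketch reconstructed from blockdev.sh lines 44-47 in the trace (the socket path is the default /var/tmp/spdk.sock named in the waitforlisten message):

    # Start/stop scaffolding around each suite:
    spdk_tgt & spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"    # polls until /var/tmp/spdk.sock accepts RPCs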
00:13:45.918 [2024-05-12 04:53:52.789061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68591 ] 00:13:45.918 [2024-05-12 04:53:52.955333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.177 [2024-05-12 04:53:53.183858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:46.177 [2024-05-12 04:53:53.184090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.552 04:53:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:47.552 04:53:54 -- common/autotest_common.sh@852 -- # return 0 00:13:47.552 04:53:54 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:13:47.552 04:53:54 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:13:47.552 04:53:54 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:13:47.552 04:53:54 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:13:47.552 04:53:54 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:47.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:48.069 Waiting for block devices as requested 00:13:48.069 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:48.069 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:48.069 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:48.327 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:53.599 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:53.599 04:54:00 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:13:53.599 04:54:00 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:13:53.599 04:54:00 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:13:53.599 04:54:00 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:13:53.599 04:54:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:53.599 04:54:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:53.599 04:54:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:53.599 04:54:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:53.599 04:54:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:13:53.599 04:54:00 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:13:53.599 04:54:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:53.599 04:54:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:13:53.599 04:54:00 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:13:53.599 04:54:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:53.599 04:54:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:13:53.599 04:54:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:53.599 04:54:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:53.599 04:54:00 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:53.599 04:54:00 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:53.599 04:54:00 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:53.599 04:54:00 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:53.599 04:54:00 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:53.599 04:54:00 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:13:53.599 04:54:00 -- bdev/blockdev.sh@98 -- # rpc_cmd 
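[editor's note] Steps 92-94 of blockdev.sh, traced at length above, reduce to a short filter-and-collect loop; a sketch in which the final piping into rpc_cmd is an assumption from the trace ordering (zoned_devs comes from get_zoned_devs, and every namespace in this run is non-zoned):

    # Build one bdev_xnvme_create line per usable NVMe namespace:
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                         # skip non-block entries
        [[ -z ${zoned_devs[${nvme##*/}]:-} ]] || continue  # skip zoned namespaces
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
    done
    printf '%s\n' "${nvmes[@]}" | rpc_cmd    # six creates in this run, io_mechanism=io_uring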
00:13:53.599 04:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.599 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.599 04:54:00 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:53.599 nvme0n1 00:13:53.599 nvme1n1 00:13:53.599 nvme1n2 00:13:53.599 nvme1n3 00:13:53.599 nvme2n1 00:13:53.599 nvme3n1 00:13:53.599 04:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:13:53.599 04:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.599 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.599 04:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@738 -- # cat 00:13:53.599 04:54:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:13:53.599 04:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.599 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.599 04:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:13:53.599 04:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.599 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.599 04:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:53.599 04:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.599 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.599 04:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:13:53.599 04:54:00 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:13:53.599 04:54:00 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:13:53.599 04:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.599 04:54:00 -- common/autotest_common.sh@10 -- # set +x 00:13:53.599 04:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.599 04:54:00 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:13:53.600 04:54:00 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2014f0b5-8d5f-43e3-ac4c-69aa5812afb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2014f0b5-8d5f-43e3-ac4c-69aa5812afb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3d67da52-8a10-49b0-a2ed-e01464217cb5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d67da52-8a10-49b0-a2ed-e01464217cb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "fd7779c7-aff3-45b3-b3d1-96b3386c5390"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd7779c7-aff3-45b3-b3d1-96b3386c5390",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "4189e85c-7e8e-4248-9798-d660a200239e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4189e85c-7e8e-4248-9798-d660a200239e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "19667078-a166-4e74-9ef3-d79891d1e1ba"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "19667078-a166-4e74-9ef3-d79891d1e1ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ca33cb58-e886-4562-9a73-4ed5981f598b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ca33cb58-e886-4562-9a73-4ed5981f598b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:53.600 04:54:00 -- bdev/blockdev.sh@747 -- # jq -r .name 00:13:53.600 04:54:00 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:13:53.600 04:54:00 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:13:53.600 04:54:00 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:13:53.600 04:54:00 -- bdev/blockdev.sh@752 -- # killprocess 68591 00:13:53.600 04:54:00 
-- common/autotest_common.sh@926 -- # '[' -z 68591 ']' 00:13:53.600 04:54:00 -- common/autotest_common.sh@930 -- # kill -0 68591 00:13:53.600 04:54:00 -- common/autotest_common.sh@931 -- # uname 00:13:53.600 04:54:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:53.600 04:54:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68591 00:13:53.600 04:54:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:53.600 04:54:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:53.600 killing process with pid 68591 00:13:53.600 04:54:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68591' 00:13:53.600 04:54:00 -- common/autotest_common.sh@945 -- # kill 68591 00:13:53.600 04:54:00 -- common/autotest_common.sh@950 -- # wait 68591 00:13:56.133 04:54:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:56.133 04:54:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:56.133 04:54:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:13:56.133 04:54:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.133 04:54:02 -- common/autotest_common.sh@10 -- # set +x 00:13:56.133 ************************************ 00:13:56.133 START TEST bdev_hello_world 00:13:56.133 ************************************ 00:13:56.133 04:54:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:56.133 [2024-05-12 04:54:02.780480] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:56.133 [2024-05-12 04:54:02.780634] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68985 ] 00:13:56.133 [2024-05-12 04:54:02.952522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.133 [2024-05-12 04:54:03.133212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.391 [2024-05-12 04:54:03.516716] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:56.391 [2024-05-12 04:54:03.516769] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:56.391 [2024-05-12 04:54:03.516792] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:56.650 [2024-05-12 04:54:03.519131] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:56.650 [2024-05-12 04:54:03.519441] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:56.650 [2024-05-12 04:54:03.519473] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:56.650 [2024-05-12 04:54:03.519814] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
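[editor's note] The hello-world pass below can be reproduced standalone against the same config; the command is the one traced (paths shortened relative to the spdk repo root), and the final NOTICE line is the pass criterion:

    # Standalone repro of the bdev_hello_world step:
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
    # expected tail of the output: "Read string from bdev : Hello World!"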
00:13:56.650 00:13:56.650 [2024-05-12 04:54:03.519854] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:57.585 00:13:57.585 real 0m1.868s 00:13:57.585 user 0m1.557s 00:13:57.585 sys 0m0.196s 00:13:57.585 04:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.585 04:54:04 -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 ************************************ 00:13:57.585 END TEST bdev_hello_world 00:13:57.585 ************************************ 00:13:57.585 04:54:04 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:13:57.585 04:54:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:57.585 04:54:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:57.585 04:54:04 -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 ************************************ 00:13:57.585 START TEST bdev_bounds 00:13:57.585 ************************************ 00:13:57.585 04:54:04 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:13:57.585 04:54:04 -- bdev/blockdev.sh@288 -- # bdevio_pid=69026 00:13:57.585 04:54:04 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:57.585 04:54:04 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:57.585 Process bdevio pid: 69026 00:13:57.585 04:54:04 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 69026' 00:13:57.585 04:54:04 -- bdev/blockdev.sh@291 -- # waitforlisten 69026 00:13:57.585 04:54:04 -- common/autotest_common.sh@819 -- # '[' -z 69026 ']' 00:13:57.585 04:54:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.585 04:54:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:57.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.585 04:54:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.585 04:54:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:57.585 04:54:04 -- common/autotest_common.sh@10 -- # set +x 00:13:57.585 [2024-05-12 04:54:04.698579] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
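[editor's note] The bdevio run that follows executes the same 23-case suite against each of the six xnvme bdevs, which accounts for the totals in the run summary after the per-suite output:

    # Suite/test accounting for the summary below:
    echo $(( 6 * 23 )) tests    # 6 bdevs x 23 bdevio cases = 138 tests, 780 asserts reported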
00:13:57.585 [2024-05-12 04:54:04.698751] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69026 ] 00:13:57.844 [2024-05-12 04:54:04.869874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:58.102 [2024-05-12 04:54:05.051014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.102 [2024-05-12 04:54:05.051177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.102 [2024-05-12 04:54:05.052046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.670 04:54:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:58.670 04:54:05 -- common/autotest_common.sh@852 -- # return 0 00:13:58.670 04:54:05 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:58.670 I/O targets: 00:13:58.670 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:58.670 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:58.670 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:58.670 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:58.670 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:58.670 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:58.670 00:13:58.670 00:13:58.670 CUnit - A unit testing framework for C - Version 2.1-3 00:13:58.670 http://cunit.sourceforge.net/ 00:13:58.670 00:13:58.670 00:13:58.670 Suite: bdevio tests on: nvme3n1 00:13:58.670 Test: blockdev write read block ...passed 00:13:58.670 Test: blockdev write zeroes read block ...passed 00:13:58.670 Test: blockdev write zeroes read no split ...passed 00:13:58.670 Test: blockdev write zeroes read split ...passed 00:13:58.670 Test: blockdev write zeroes read split partial ...passed 00:13:58.670 Test: blockdev reset ...passed 00:13:58.670 Test: blockdev write read 8 blocks ...passed 00:13:58.670 Test: blockdev write read size > 128k ...passed 00:13:58.670 Test: blockdev write read invalid size ...passed 00:13:58.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:58.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:58.670 Test: blockdev write read max offset ...passed 00:13:58.670 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:58.670 Test: blockdev writev readv 8 blocks ...passed 00:13:58.670 Test: blockdev writev readv 30 x 1block ...passed 00:13:58.670 Test: blockdev writev readv block ...passed 00:13:58.670 Test: blockdev writev readv size > 128k ...passed 00:13:58.670 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:58.670 Test: blockdev comparev and writev ...passed 00:13:58.670 Test: blockdev nvme passthru rw ...passed 00:13:58.670 Test: blockdev nvme passthru vendor specific ...passed 00:13:58.670 Test: blockdev nvme admin passthru ...passed 00:13:58.670 Test: blockdev copy ...passed 00:13:58.670 Suite: bdevio tests on: nvme2n1 00:13:58.670 Test: blockdev write read block ...passed 00:13:58.670 Test: blockdev write zeroes read block ...passed 00:13:58.929 Test: blockdev write zeroes read no split ...passed 00:13:58.929 Test: blockdev write zeroes read split ...passed 00:13:58.929 Test: blockdev write zeroes read split partial ...passed 00:13:58.929 Test: blockdev reset ...passed 00:13:58.929 Test: blockdev write read 8 blocks ...passed 00:13:58.929 Test: blockdev write read size > 128k 
...passed 00:13:58.929 Test: blockdev write read invalid size ...passed 00:13:58.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:58.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:58.929 Test: blockdev write read max offset ...passed 00:13:58.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:58.929 Test: blockdev writev readv 8 blocks ...passed 00:13:58.929 Test: blockdev writev readv 30 x 1block ...passed 00:13:58.929 Test: blockdev writev readv block ...passed 00:13:58.929 Test: blockdev writev readv size > 128k ...passed 00:13:58.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:58.929 Test: blockdev comparev and writev ...passed 00:13:58.929 Test: blockdev nvme passthru rw ...passed 00:13:58.929 Test: blockdev nvme passthru vendor specific ...passed 00:13:58.929 Test: blockdev nvme admin passthru ...passed 00:13:58.929 Test: blockdev copy ...passed 00:13:58.929 Suite: bdevio tests on: nvme1n3 00:13:58.929 Test: blockdev write read block ...passed 00:13:58.929 Test: blockdev write zeroes read block ...passed 00:13:58.929 Test: blockdev write zeroes read no split ...passed 00:13:58.929 Test: blockdev write zeroes read split ...passed 00:13:58.929 Test: blockdev write zeroes read split partial ...passed 00:13:58.929 Test: blockdev reset ...passed 00:13:58.929 Test: blockdev write read 8 blocks ...passed 00:13:58.929 Test: blockdev write read size > 128k ...passed 00:13:58.929 Test: blockdev write read invalid size ...passed 00:13:58.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:58.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:58.929 Test: blockdev write read max offset ...passed 00:13:58.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:58.929 Test: blockdev writev readv 8 blocks ...passed 00:13:58.929 Test: blockdev writev readv 30 x 1block ...passed 00:13:58.929 Test: blockdev writev readv block ...passed 00:13:58.929 Test: blockdev writev readv size > 128k ...passed 00:13:58.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:58.929 Test: blockdev comparev and writev ...passed 00:13:58.929 Test: blockdev nvme passthru rw ...passed 00:13:58.929 Test: blockdev nvme passthru vendor specific ...passed 00:13:58.929 Test: blockdev nvme admin passthru ...passed 00:13:58.929 Test: blockdev copy ...passed 00:13:58.929 Suite: bdevio tests on: nvme1n2 00:13:58.929 Test: blockdev write read block ...passed 00:13:58.929 Test: blockdev write zeroes read block ...passed 00:13:58.929 Test: blockdev write zeroes read no split ...passed 00:13:58.929 Test: blockdev write zeroes read split ...passed 00:13:58.929 Test: blockdev write zeroes read split partial ...passed 00:13:58.929 Test: blockdev reset ...passed 00:13:58.929 Test: blockdev write read 8 blocks ...passed 00:13:58.929 Test: blockdev write read size > 128k ...passed 00:13:58.929 Test: blockdev write read invalid size ...passed 00:13:58.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:58.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:58.929 Test: blockdev write read max offset ...passed 00:13:58.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:58.929 Test: blockdev writev readv 8 blocks ...passed 00:13:58.929 Test: blockdev writev readv 30 x 1block ...passed 00:13:58.929 Test: blockdev writev readv 
block ...passed 00:13:58.929 Test: blockdev writev readv size > 128k ...passed 00:13:58.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:58.929 Test: blockdev comparev and writev ...passed 00:13:58.929 Test: blockdev nvme passthru rw ...passed 00:13:58.929 Test: blockdev nvme passthru vendor specific ...passed 00:13:58.929 Test: blockdev nvme admin passthru ...passed 00:13:58.929 Test: blockdev copy ...passed 00:13:58.929 Suite: bdevio tests on: nvme1n1 00:13:58.929 Test: blockdev write read block ...passed 00:13:58.929 Test: blockdev write zeroes read block ...passed 00:13:58.929 Test: blockdev write zeroes read no split ...passed 00:13:59.187 Test: blockdev write zeroes read split ...passed 00:13:59.187 Test: blockdev write zeroes read split partial ...passed 00:13:59.187 Test: blockdev reset ...passed 00:13:59.187 Test: blockdev write read 8 blocks ...passed 00:13:59.187 Test: blockdev write read size > 128k ...passed 00:13:59.187 Test: blockdev write read invalid size ...passed 00:13:59.187 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.187 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.187 Test: blockdev write read max offset ...passed 00:13:59.187 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.187 Test: blockdev writev readv 8 blocks ...passed 00:13:59.187 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.187 Test: blockdev writev readv block ...passed 00:13:59.187 Test: blockdev writev readv size > 128k ...passed 00:13:59.187 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.187 Test: blockdev comparev and writev ...passed 00:13:59.187 Test: blockdev nvme passthru rw ...passed 00:13:59.187 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.187 Test: blockdev nvme admin passthru ...passed 00:13:59.187 Test: blockdev copy ...passed 00:13:59.187 Suite: bdevio tests on: nvme0n1 00:13:59.187 Test: blockdev write read block ...passed 00:13:59.187 Test: blockdev write zeroes read block ...passed 00:13:59.187 Test: blockdev write zeroes read no split ...passed 00:13:59.187 Test: blockdev write zeroes read split ...passed 00:13:59.187 Test: blockdev write zeroes read split partial ...passed 00:13:59.187 Test: blockdev reset ...passed 00:13:59.187 Test: blockdev write read 8 blocks ...passed 00:13:59.187 Test: blockdev write read size > 128k ...passed 00:13:59.187 Test: blockdev write read invalid size ...passed 00:13:59.187 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.187 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.187 Test: blockdev write read max offset ...passed 00:13:59.187 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.187 Test: blockdev writev readv 8 blocks ...passed 00:13:59.187 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.187 Test: blockdev writev readv block ...passed 00:13:59.187 Test: blockdev writev readv size > 128k ...passed 00:13:59.187 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.187 Test: blockdev comparev and writev ...passed 00:13:59.187 Test: blockdev nvme passthru rw ...passed 00:13:59.187 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.187 Test: blockdev nvme admin passthru ...passed 00:13:59.187 Test: blockdev copy ...passed 00:13:59.187 00:13:59.187 Run Summary: Type Total Ran Passed Failed Inactive 00:13:59.187 suites 6 6 n/a 0 0 
00:13:59.187 tests 138 138 138 0 0 00:13:59.187 asserts 780 780 780 0 n/a 00:13:59.188 00:13:59.188 Elapsed time = 1.193 seconds 00:13:59.188 0 00:13:59.188 04:54:06 -- bdev/blockdev.sh@293 -- # killprocess 69026 00:13:59.188 04:54:06 -- common/autotest_common.sh@926 -- # '[' -z 69026 ']' 00:13:59.188 04:54:06 -- common/autotest_common.sh@930 -- # kill -0 69026 00:13:59.188 04:54:06 -- common/autotest_common.sh@931 -- # uname 00:13:59.188 04:54:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:59.188 04:54:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69026 00:13:59.188 04:54:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:59.188 04:54:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:59.188 killing process with pid 69026 00:13:59.188 04:54:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69026' 00:13:59.188 04:54:06 -- common/autotest_common.sh@945 -- # kill 69026 00:13:59.188 04:54:06 -- common/autotest_common.sh@950 -- # wait 69026 00:14:00.608 04:54:07 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:14:00.608 00:14:00.608 real 0m2.674s 00:14:00.608 user 0m6.396s 00:14:00.608 sys 0m0.366s 00:14:00.608 04:54:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.608 04:54:07 -- common/autotest_common.sh@10 -- # set +x 00:14:00.608 ************************************ 00:14:00.608 END TEST bdev_bounds 00:14:00.608 ************************************ 00:14:00.608 04:54:07 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:00.608 04:54:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:00.608 04:54:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:00.608 04:54:07 -- common/autotest_common.sh@10 -- # set +x 00:14:00.608 ************************************ 00:14:00.608 START TEST bdev_nbd 00:14:00.608 ************************************ 00:14:00.608 04:54:07 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:00.608 04:54:07 -- bdev/blockdev.sh@298 -- # uname -s 00:14:00.608 04:54:07 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:14:00.608 04:54:07 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.608 04:54:07 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:00.608 04:54:07 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:00.608 04:54:07 -- bdev/blockdev.sh@302 -- # local bdev_all 00:14:00.608 04:54:07 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:14:00.608 04:54:07 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:14:00.608 04:54:07 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:00.608 04:54:07 -- bdev/blockdev.sh@309 -- # local nbd_all 00:14:00.608 04:54:07 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:14:00.608 04:54:07 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:00.608 04:54:07 -- bdev/blockdev.sh@312 -- # local nbd_list 00:14:00.608 04:54:07 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:00.608 04:54:07 -- bdev/blockdev.sh@313 -- # local bdev_list 00:14:00.608 04:54:07 -- bdev/blockdev.sh@316 -- # nbd_pid=69082 00:14:00.608 04:54:07 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:00.608 04:54:07 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:00.608 04:54:07 -- bdev/blockdev.sh@318 -- # waitforlisten 69082 /var/tmp/spdk-nbd.sock 00:14:00.608 04:54:07 -- common/autotest_common.sh@819 -- # '[' -z 69082 ']' 00:14:00.608 04:54:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:00.608 04:54:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:00.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:00.608 04:54:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:00.608 04:54:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:00.608 04:54:07 -- common/autotest_common.sh@10 -- # set +x 00:14:00.608 [2024-05-12 04:54:07.418680] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:00.608 [2024-05-12 04:54:07.418873] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.608 [2024-05-12 04:54:07.578836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.867 [2024-05-12 04:54:07.760269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.433 04:54:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:01.433 04:54:08 -- common/autotest_common.sh@852 -- # return 0 00:14:01.433 04:54:08 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@24 -- # local i 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:01.433 04:54:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:01.433 04:54:08 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd0 00:14:01.433 04:54:08 -- common/autotest_common.sh@857 -- # local i 00:14:01.433 04:54:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:01.433 04:54:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:01.433 04:54:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:01.691 04:54:08 -- common/autotest_common.sh@861 -- # break 00:14:01.691 04:54:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:01.691 04:54:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:01.691 04:54:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.691 1+0 records in 00:14:01.691 1+0 records out 00:14:01.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472213 s, 8.7 MB/s 00:14:01.691 04:54:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.691 04:54:08 -- common/autotest_common.sh@874 -- # size=4096 00:14:01.691 04:54:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.691 04:54:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:01.691 04:54:08 -- common/autotest_common.sh@877 -- # return 0 00:14:01.691 04:54:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.691 04:54:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:01.691 04:54:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:01.950 04:54:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:01.950 04:54:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:01.950 04:54:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:01.950 04:54:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:01.950 04:54:08 -- common/autotest_common.sh@857 -- # local i 00:14:01.950 04:54:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:01.950 04:54:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:01.950 04:54:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:01.950 04:54:08 -- common/autotest_common.sh@861 -- # break 00:14:01.950 04:54:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:01.950 04:54:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:01.950 04:54:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.950 1+0 records in 00:14:01.950 1+0 records out 00:14:01.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513019 s, 8.0 MB/s 00:14:01.950 04:54:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.950 04:54:08 -- common/autotest_common.sh@874 -- # size=4096 00:14:01.950 04:54:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.950 04:54:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:01.950 04:54:08 -- common/autotest_common.sh@877 -- # return 0 00:14:01.950 04:54:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.950 04:54:08 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:01.950 04:54:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:14:02.209 04:54:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:02.209 04:54:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:02.209 04:54:09 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:14:02.209 04:54:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:14:02.209 04:54:09 -- common/autotest_common.sh@857 -- # local i 00:14:02.209 04:54:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:02.209 04:54:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:02.209 04:54:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:14:02.209 04:54:09 -- common/autotest_common.sh@861 -- # break 00:14:02.209 04:54:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:02.209 04:54:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:02.209 04:54:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.209 1+0 records in 00:14:02.209 1+0 records out 00:14:02.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483569 s, 8.5 MB/s 00:14:02.209 04:54:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.209 04:54:09 -- common/autotest_common.sh@874 -- # size=4096 00:14:02.209 04:54:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.209 04:54:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:02.209 04:54:09 -- common/autotest_common.sh@877 -- # return 0 00:14:02.209 04:54:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.209 04:54:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.209 04:54:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:14:02.468 04:54:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:02.468 04:54:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:02.468 04:54:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:02.468 04:54:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:14:02.468 04:54:09 -- common/autotest_common.sh@857 -- # local i 00:14:02.468 04:54:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:02.468 04:54:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:02.468 04:54:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:14:02.468 04:54:09 -- common/autotest_common.sh@861 -- # break 00:14:02.468 04:54:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:02.468 04:54:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:02.468 04:54:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.468 1+0 records in 00:14:02.468 1+0 records out 00:14:02.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682169 s, 6.0 MB/s 00:14:02.468 04:54:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.468 04:54:09 -- common/autotest_common.sh@874 -- # size=4096 00:14:02.468 04:54:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.468 04:54:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:02.468 04:54:09 -- common/autotest_common.sh@877 -- # return 0 00:14:02.468 04:54:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.468 04:54:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.468 04:54:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:02.727 04:54:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:02.727 04:54:09 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:02.727 04:54:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:02.727 04:54:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:14:02.727 04:54:09 -- common/autotest_common.sh@857 -- # local i 00:14:02.727 04:54:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:02.727 04:54:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:02.727 04:54:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:14:02.727 04:54:09 -- common/autotest_common.sh@861 -- # break 00:14:02.727 04:54:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:02.727 04:54:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:02.727 04:54:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.727 1+0 records in 00:14:02.727 1+0 records out 00:14:02.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838294 s, 4.9 MB/s 00:14:02.727 04:54:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.727 04:54:09 -- common/autotest_common.sh@874 -- # size=4096 00:14:02.727 04:54:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.727 04:54:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:02.727 04:54:09 -- common/autotest_common.sh@877 -- # return 0 00:14:02.727 04:54:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.727 04:54:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.727 04:54:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:02.985 04:54:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:02.986 04:54:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:02.986 04:54:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:02.986 04:54:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:14:02.986 04:54:10 -- common/autotest_common.sh@857 -- # local i 00:14:02.986 04:54:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:02.986 04:54:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:02.986 04:54:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:14:02.986 04:54:10 -- common/autotest_common.sh@861 -- # break 00:14:02.986 04:54:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:02.986 04:54:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:02.986 04:54:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.986 1+0 records in 00:14:02.986 1+0 records out 00:14:02.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763925 s, 5.4 MB/s 00:14:02.986 04:54:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.986 04:54:10 -- common/autotest_common.sh@874 -- # size=4096 00:14:02.986 04:54:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.986 04:54:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:02.986 04:54:10 -- common/autotest_common.sh@877 -- # return 0 00:14:02.986 04:54:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.986 04:54:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.986 04:54:10 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:03.244 04:54:10 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd0", 00:14:03.244 "bdev_name": "nvme0n1" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd1", 00:14:03.244 "bdev_name": "nvme1n1" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd2", 00:14:03.244 "bdev_name": "nvme1n2" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd3", 00:14:03.244 "bdev_name": "nvme1n3" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd4", 00:14:03.244 "bdev_name": "nvme2n1" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd5", 00:14:03.244 "bdev_name": "nvme3n1" 00:14:03.244 } 00:14:03.244 ]' 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd0", 00:14:03.244 "bdev_name": "nvme0n1" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd1", 00:14:03.244 "bdev_name": "nvme1n1" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd2", 00:14:03.244 "bdev_name": "nvme1n2" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd3", 00:14:03.244 "bdev_name": "nvme1n3" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd4", 00:14:03.244 "bdev_name": "nvme2n1" 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "nbd_device": "/dev/nbd5", 00:14:03.244 "bdev_name": "nvme3n1" 00:14:03.244 } 00:14:03.244 ]' 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@51 -- # local i 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.244 04:54:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@41 -- # break 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.503 04:54:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:03.762 04:54:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@41 -- # break 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.021 04:54:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@41 -- # break 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.021 04:54:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@41 -- # break 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.279 04:54:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:04.538 04:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@41 -- # break 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.796 04:54:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@41 -- # break 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.055 04:54:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
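[Editor's note] The trace above repeats the same two-phase readiness check for every nbd device before touching it. Below is a minimal reconstruction of that pattern, not the verbatim nbd_common.sh/autotest_common.sh source: the helper name, the 20-try budget, the grep against /proc/partitions, and the single direct-I/O probe read all come from the xtrace lines, while the sleep between retries and the shortened scratch path are assumptions.

    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: wait for the device node to appear in the partition table.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; no delay is visible in the xtrace output
        done
        # Phase 2: prove the device actually services I/O with one direct 4 KiB read.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

The inverse helper, waitfornbd_exit, polls the same /proc/partitions file until the name disappears (break on a failed grep), which is why every nbd_stop_disk call in the trace is followed by the same grep/break sequence.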
00:14:05.316 04:54:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@65 -- # true 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@65 -- # count=0 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@122 -- # count=0 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@127 -- # return 0 00:14:05.316 04:54:12 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@12 -- # local i 00:14:05.316 04:54:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.317 04:54:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:05.317 04:54:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:05.575 /dev/nbd0 00:14:05.575 04:54:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:05.575 04:54:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:05.575 04:54:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:05.575 04:54:12 -- common/autotest_common.sh@857 -- # local i 00:14:05.575 04:54:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:05.575 04:54:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:05.575 04:54:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:05.575 04:54:12 -- common/autotest_common.sh@861 -- # break 00:14:05.575 04:54:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:05.575 04:54:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:05.575 04:54:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.575 1+0 records in 00:14:05.575 1+0 records out 00:14:05.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349515 s, 
11.7 MB/s 00:14:05.575 04:54:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.575 04:54:12 -- common/autotest_common.sh@874 -- # size=4096 00:14:05.575 04:54:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.575 04:54:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:05.575 04:54:12 -- common/autotest_common.sh@877 -- # return 0 00:14:05.575 04:54:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.575 04:54:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:05.575 04:54:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:05.833 /dev/nbd1 00:14:05.833 04:54:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:05.833 04:54:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:05.833 04:54:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:05.833 04:54:12 -- common/autotest_common.sh@857 -- # local i 00:14:05.833 04:54:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:05.833 04:54:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:05.833 04:54:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:05.833 04:54:12 -- common/autotest_common.sh@861 -- # break 00:14:05.833 04:54:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:05.833 04:54:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:05.833 04:54:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.833 1+0 records in 00:14:05.833 1+0 records out 00:14:05.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447681 s, 9.1 MB/s 00:14:05.833 04:54:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.833 04:54:12 -- common/autotest_common.sh@874 -- # size=4096 00:14:05.833 04:54:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.833 04:54:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:05.833 04:54:12 -- common/autotest_common.sh@877 -- # return 0 00:14:05.833 04:54:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.833 04:54:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:05.833 04:54:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:14:06.092 /dev/nbd10 00:14:06.092 04:54:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:06.092 04:54:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:06.092 04:54:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:14:06.092 04:54:13 -- common/autotest_common.sh@857 -- # local i 00:14:06.092 04:54:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:06.092 04:54:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:06.092 04:54:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:14:06.092 04:54:13 -- common/autotest_common.sh@861 -- # break 00:14:06.092 04:54:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:06.092 04:54:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:06.092 04:54:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.092 1+0 records in 00:14:06.092 1+0 records out 00:14:06.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.00062012 s, 6.6 MB/s 00:14:06.092 04:54:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.092 04:54:13 -- common/autotest_common.sh@874 -- # size=4096 00:14:06.092 04:54:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.092 04:54:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:06.092 04:54:13 -- common/autotest_common.sh@877 -- # return 0 00:14:06.092 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.092 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:06.092 04:54:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:14:06.351 /dev/nbd11 00:14:06.351 04:54:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:06.351 04:54:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:06.351 04:54:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:14:06.351 04:54:13 -- common/autotest_common.sh@857 -- # local i 00:14:06.351 04:54:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:06.351 04:54:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:06.351 04:54:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:14:06.351 04:54:13 -- common/autotest_common.sh@861 -- # break 00:14:06.351 04:54:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:06.351 04:54:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:06.351 04:54:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.351 1+0 records in 00:14:06.351 1+0 records out 00:14:06.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715582 s, 5.7 MB/s 00:14:06.351 04:54:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.351 04:54:13 -- common/autotest_common.sh@874 -- # size=4096 00:14:06.351 04:54:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.351 04:54:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:06.351 04:54:13 -- common/autotest_common.sh@877 -- # return 0 00:14:06.351 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.351 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:06.351 04:54:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:14:06.610 /dev/nbd12 00:14:06.610 04:54:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:06.610 04:54:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:06.610 04:54:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:14:06.610 04:54:13 -- common/autotest_common.sh@857 -- # local i 00:14:06.610 04:54:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:06.610 04:54:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:06.610 04:54:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:14:06.610 04:54:13 -- common/autotest_common.sh@861 -- # break 00:14:06.610 04:54:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:06.610 04:54:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:06.610 04:54:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.610 1+0 records in 00:14:06.610 1+0 records out 00:14:06.610 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000630151 s, 6.5 MB/s 00:14:06.610 04:54:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.610 04:54:13 -- common/autotest_common.sh@874 -- # size=4096 00:14:06.610 04:54:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.610 04:54:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:06.610 04:54:13 -- common/autotest_common.sh@877 -- # return 0 00:14:06.610 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.610 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:06.610 04:54:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:06.869 /dev/nbd13 00:14:06.869 04:54:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:06.869 04:54:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:06.869 04:54:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:14:06.869 04:54:13 -- common/autotest_common.sh@857 -- # local i 00:14:06.869 04:54:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:06.869 04:54:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:06.869 04:54:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:14:06.869 04:54:13 -- common/autotest_common.sh@861 -- # break 00:14:06.869 04:54:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:06.869 04:54:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:06.869 04:54:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.869 1+0 records in 00:14:06.869 1+0 records out 00:14:06.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00077283 s, 5.3 MB/s 00:14:06.869 04:54:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.869 04:54:13 -- common/autotest_common.sh@874 -- # size=4096 00:14:06.869 04:54:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.869 04:54:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:06.869 04:54:13 -- common/autotest_common.sh@877 -- # return 0 00:14:06.869 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.869 04:54:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:06.869 04:54:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:06.869 04:54:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:06.869 04:54:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd0", 00:14:07.128 "bdev_name": "nvme0n1" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd1", 00:14:07.128 "bdev_name": "nvme1n1" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd10", 00:14:07.128 "bdev_name": "nvme1n2" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd11", 00:14:07.128 "bdev_name": "nvme1n3" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd12", 00:14:07.128 "bdev_name": "nvme2n1" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd13", 00:14:07.128 "bdev_name": "nvme3n1" 00:14:07.128 } 00:14:07.128 ]' 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:07.128 { 00:14:07.128 "nbd_device": 
"/dev/nbd0", 00:14:07.128 "bdev_name": "nvme0n1" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd1", 00:14:07.128 "bdev_name": "nvme1n1" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd10", 00:14:07.128 "bdev_name": "nvme1n2" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd11", 00:14:07.128 "bdev_name": "nvme1n3" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd12", 00:14:07.128 "bdev_name": "nvme2n1" 00:14:07.128 }, 00:14:07.128 { 00:14:07.128 "nbd_device": "/dev/nbd13", 00:14:07.128 "bdev_name": "nvme3n1" 00:14:07.128 } 00:14:07.128 ]' 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:07.128 /dev/nbd1 00:14:07.128 /dev/nbd10 00:14:07.128 /dev/nbd11 00:14:07.128 /dev/nbd12 00:14:07.128 /dev/nbd13' 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:07.128 /dev/nbd1 00:14:07.128 /dev/nbd10 00:14:07.128 /dev/nbd11 00:14:07.128 /dev/nbd12 00:14:07.128 /dev/nbd13' 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@65 -- # count=6 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@66 -- # echo 6 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@95 -- # count=6 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:07.128 256+0 records in 00:14:07.128 256+0 records out 00:14:07.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662881 s, 158 MB/s 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.128 04:54:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:07.387 256+0 records in 00:14:07.387 256+0 records out 00:14:07.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180507 s, 5.8 MB/s 00:14:07.387 04:54:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.387 04:54:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:07.387 256+0 records in 00:14:07.387 256+0 records out 00:14:07.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168623 s, 6.2 MB/s 00:14:07.387 04:54:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.387 04:54:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:07.646 256+0 records in 00:14:07.646 256+0 records out 00:14:07.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175176 s, 6.0 MB/s 00:14:07.646 04:54:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.646 04:54:14 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:07.905 256+0 records in 00:14:07.905 256+0 records out 00:14:07.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156718 s, 6.7 MB/s 00:14:07.905 04:54:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.905 04:54:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:07.905 256+0 records in 00:14:07.905 256+0 records out 00:14:07.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182731 s, 5.7 MB/s 00:14:07.905 04:54:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.905 04:54:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:08.165 256+0 records in 00:14:08.165 256+0 records out 00:14:08.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170518 s, 6.1 MB/s 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@51 -- # local i 00:14:08.165 04:54:15 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.165 04:54:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@41 -- # break 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.423 04:54:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.424 04:54:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@41 -- # break 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.682 04:54:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@41 -- # break 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.940 04:54:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@41 -- # break 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.198 04:54:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:09.457 04:54:16 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@41 -- # break 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.457 04:54:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@41 -- # break 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.716 04:54:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:09.974 04:54:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:09.974 04:54:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:09.974 04:54:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@65 -- # true 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@65 -- # count=0 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@104 -- # count=0 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@109 -- # return 0 00:14:10.233 04:54:17 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:10.233 04:54:17 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:10.491 malloc_lvol_verify 00:14:10.491 04:54:17 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:10.750 6dca6515-4496-4e49-8cf9-cae44fa5d5ed 00:14:10.750 04:54:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:11.008 da53c58d-386a-44cb-ad4c-4ebacb273f8f 00:14:11.008 04:54:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:11.008 /dev/nbd0 00:14:11.267 04:54:18 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:11.267 mke2fs 1.46.5 (30-Dec-2021) 00:14:11.267 Discarding device blocks: 0/4096 done 00:14:11.267 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:11.267 00:14:11.267 Allocating group tables: 0/1 done 00:14:11.267 Writing inode tables: 0/1 done 00:14:11.267 Creating journal (1024 blocks): done 00:14:11.267 Writing superblocks and filesystem accounting information: 0/1 done 00:14:11.267 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@51 -- # local i 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.267 04:54:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@41 -- # break 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:11.526 04:54:18 -- bdev/nbd_common.sh@147 -- # return 0 00:14:11.526 04:54:18 -- bdev/blockdev.sh@324 -- # killprocess 69082 00:14:11.526 04:54:18 -- common/autotest_common.sh@926 -- # '[' -z 69082 ']' 00:14:11.526 04:54:18 -- common/autotest_common.sh@930 -- # kill -0 69082 00:14:11.526 04:54:18 -- common/autotest_common.sh@931 -- # uname 00:14:11.526 04:54:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:11.526 04:54:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69082 00:14:11.526 killing process with pid 69082 00:14:11.526 04:54:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:11.526 04:54:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:11.526 04:54:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69082' 00:14:11.526 04:54:18 -- common/autotest_common.sh@945 -- # kill 69082 00:14:11.526 04:54:18 -- common/autotest_common.sh@950 -- # wait 69082 00:14:12.463 ************************************ 00:14:12.463 END TEST bdev_nbd 00:14:12.463 ************************************ 00:14:12.463 04:54:19 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:14:12.463 00:14:12.463 real 0m12.212s 00:14:12.463 user 0m17.006s 00:14:12.463 sys 0m4.042s 00:14:12.463 04:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.463 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:14:12.463 04:54:19 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:14:12.463 04:54:19 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:14:12.463 04:54:19 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:14:12.463 04:54:19 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:14:12.722 04:54:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
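[Editor's note] The bdev_nbd test that just ended verifies data integrity with a plain write/read-back round trip over all six exported devices. This is a reconstruction of that shape from the nbd_dd_data_verify xtrace lines above, not the exact script source; the paths, block size, and count are copied from the trace.

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through each nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte read-back verify
    done
    rm "$tmp"

Using oflag=direct on the writes bypasses the page cache, so the subsequent cmp exercises the SPDK bdev path rather than cached pages, which is consistent with the ~6 MB/s rates reported above.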
00:14:12.722 04:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:12.722 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:14:12.722 ************************************ 00:14:12.722 START TEST bdev_fio 00:14:12.722 ************************************ 00:14:12.722 04:54:19 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:14:12.722 04:54:19 -- bdev/blockdev.sh@329 -- # local env_context 00:14:12.722 04:54:19 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:12.722 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:12.722 04:54:19 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:12.722 04:54:19 -- bdev/blockdev.sh@337 -- # echo '' 00:14:12.722 04:54:19 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:14:12.722 04:54:19 -- bdev/blockdev.sh@337 -- # env_context= 00:14:12.722 04:54:19 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:12.722 04:54:19 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:12.722 04:54:19 -- common/autotest_common.sh@1260 -- # local workload=verify 00:14:12.722 04:54:19 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:14:12.722 04:54:19 -- common/autotest_common.sh@1262 -- # local env_context= 00:14:12.722 04:54:19 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:14:12.722 04:54:19 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:12.722 04:54:19 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:14:12.722 04:54:19 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:14:12.722 04:54:19 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:12.723 04:54:19 -- common/autotest_common.sh@1280 -- # cat 00:14:12.723 04:54:19 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:14:12.723 04:54:19 -- common/autotest_common.sh@1293 -- # cat 00:14:12.723 04:54:19 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:14:12.723 04:54:19 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:14:12.723 04:54:19 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:12.723 04:54:19 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:14:12.723 04:54:19 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:12.723 04:54:19 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:14:12.723 04:54:19 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:14:12.723 04:54:19 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:12.723 04:54:19 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:14:12.723 04:54:19 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:14:12.723 04:54:19 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:12.723 04:54:19 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:14:12.723 04:54:19 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:14:12.723 04:54:19 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:12.723 04:54:19 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:14:12.723 04:54:19 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:14:12.723 04:54:19 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:12.723 04:54:19 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:14:12.723 04:54:19 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:14:12.723 04:54:19 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:14:12.723 04:54:19 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:14:12.723 04:54:19 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:14:12.723 04:54:19 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:12.723 04:54:19 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:12.723 04:54:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:14:12.723 04:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:12.723 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:14:12.723 ************************************ 00:14:12.723 START TEST bdev_fio_rw_verify 00:14:12.723 ************************************ 00:14:12.723 04:54:19 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:12.723 04:54:19 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:12.723 04:54:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:12.723 04:54:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:12.723 04:54:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:12.723 04:54:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.723 04:54:19 -- common/autotest_common.sh@1320 -- # shift 00:14:12.723 04:54:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:12.723 04:54:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:12.723 04:54:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.723 04:54:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:12.723 04:54:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:12.723 04:54:19 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:12.723 04:54:19 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:12.723 04:54:19 -- common/autotest_common.sh@1326 -- # break 00:14:12.723 04:54:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:12.723 04:54:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:12.981 
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:12.981 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:12.981 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:12.981 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:12.981 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:12.981 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:12.981 fio-3.35 00:14:12.981 Starting 6 threads 00:14:25.196 00:14:25.196 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69504: Sun May 12 04:54:30 2024 00:14:25.196 read: IOPS=27.3k, BW=107MiB/s (112MB/s)(1067MiB/10001msec) 00:14:25.196 slat (usec): min=3, max=2376, avg= 7.05, stdev= 6.36 00:14:25.196 clat (usec): min=119, max=8813, avg=689.27, stdev=231.02 00:14:25.196 lat (usec): min=122, max=8821, avg=696.32, stdev=231.67 00:14:25.196 clat percentiles (usec): 00:14:25.196 | 50.000th=[ 709], 99.000th=[ 1287], 99.900th=[ 1795], 99.990th=[ 4015], 00:14:25.196 | 99.999th=[ 8848] 00:14:25.196 write: IOPS=27.6k, BW=108MiB/s (113MB/s)(1078MiB/10001msec); 0 zone resets 00:14:25.196 slat (usec): min=14, max=1250, avg=26.44, stdev=23.42 00:14:25.196 clat (usec): min=93, max=3383, avg=773.76, stdev=238.33 00:14:25.196 lat (usec): min=126, max=3416, avg=800.21, stdev=240.31 00:14:25.196 clat percentiles (usec): 00:14:25.196 | 50.000th=[ 775], 99.000th=[ 1450], 99.900th=[ 1909], 99.990th=[ 2278], 00:14:25.196 | 99.999th=[ 3326] 00:14:25.196 bw ( KiB/s): min=94999, max=133928, per=99.80%, avg=110133.00, stdev=1860.18, samples=114 00:14:25.196 iops : min=23749, max=33482, avg=27533.16, stdev=465.06, samples=114 00:14:25.196 lat (usec) : 100=0.01%, 250=2.07%, 500=13.79%, 750=36.42%, 1000=37.12% 00:14:25.196 lat (msec) : 2=10.54%, 4=0.05%, 10=0.01% 00:14:25.196 cpu : usr=63.55%, sys=24.08%, ctx=7051, majf=0, minf=25262 00:14:25.196 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.196 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.196 issued rwts: total=273097,275915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.196 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:25.196 00:14:25.196 Run status group 0 (all jobs): 00:14:25.196 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1067MiB (1119MB), run=10001-10001msec 00:14:25.196 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=1078MiB (1130MB), run=10001-10001msec 00:14:25.196 ----------------------------------------------------- 00:14:25.196 Suppressions used: 00:14:25.196 count bytes template 00:14:25.196 6 48 /usr/src/fio/parse.c 00:14:25.196 2637 253152 /usr/src/fio/iolog.c 00:14:25.196 1 8 libtcmalloc_minimal.so 00:14:25.196 1 904 libcrypto.so 00:14:25.196 ----------------------------------------------------- 00:14:25.196 00:14:25.196 ************************************ 00:14:25.196 END TEST bdev_fio_rw_verify 00:14:25.196 ************************************ 00:14:25.196 00:14:25.196 real 0m12.262s 00:14:25.196 user 0m39.999s 00:14:25.196 sys 0m14.865s 00:14:25.196 04:54:31 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.196 04:54:31 -- common/autotest_common.sh@10 -- # set +x 00:14:25.196 04:54:31 -- bdev/blockdev.sh@348 -- # rm -f 00:14:25.196 04:54:31 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:25.196 04:54:31 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:25.196 04:54:31 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:25.196 04:54:31 -- common/autotest_common.sh@1260 -- # local workload=trim 00:14:25.196 04:54:31 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:14:25.196 04:54:31 -- common/autotest_common.sh@1262 -- # local env_context= 00:14:25.196 04:54:31 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:14:25.197 04:54:31 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:25.197 04:54:31 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:14:25.197 04:54:31 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:14:25.197 04:54:31 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:25.197 04:54:31 -- common/autotest_common.sh@1280 -- # cat 00:14:25.197 04:54:31 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:14:25.197 04:54:31 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:14:25.197 04:54:31 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:14:25.197 04:54:31 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:25.197 04:54:31 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2014f0b5-8d5f-43e3-ac4c-69aa5812afb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2014f0b5-8d5f-43e3-ac4c-69aa5812afb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3d67da52-8a10-49b0-a2ed-e01464217cb5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d67da52-8a10-49b0-a2ed-e01464217cb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "fd7779c7-aff3-45b3-b3d1-96b3386c5390"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd7779c7-aff3-45b3-b3d1-96b3386c5390",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "4189e85c-7e8e-4248-9798-d660a200239e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4189e85c-7e8e-4248-9798-d660a200239e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "19667078-a166-4e74-9ef3-d79891d1e1ba"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "19667078-a166-4e74-9ef3-d79891d1e1ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ca33cb58-e886-4562-9a73-4ed5981f598b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ca33cb58-e886-4562-9a73-4ed5981f598b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:25.197 04:54:32 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:14:25.197 04:54:32 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:25.197 04:54:32 -- bdev/blockdev.sh@360 -- # popd 00:14:25.197 /home/vagrant/spdk_repo/spdk 00:14:25.197 04:54:32 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:14:25.197 04:54:32 -- bdev/blockdev.sh@362 -- # return 0 00:14:25.197 00:14:25.197 real 0m12.442s 00:14:25.197 user 0m40.103s 00:14:25.197 sys 0m14.939s 00:14:25.197 04:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.197 04:54:32 -- common/autotest_common.sh@10 -- # set +x 00:14:25.197 ************************************ 00:14:25.197 END TEST bdev_fio 00:14:25.197 ************************************ 00:14:25.197 04:54:32 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:25.197 04:54:32 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:25.197 04:54:32 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:14:25.197 04:54:32 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:14:25.197 04:54:32 -- common/autotest_common.sh@10 -- # set +x 00:14:25.197 ************************************ 00:14:25.197 START TEST bdev_verify 00:14:25.197 ************************************ 00:14:25.197 04:54:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:25.197 [2024-05-12 04:54:32.196596] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:25.197 [2024-05-12 04:54:32.196908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69683 ] 00:14:25.455 [2024-05-12 04:54:32.378174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:25.455 [2024-05-12 04:54:32.573380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.455 [2024-05-12 04:54:32.573396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.022 Running I/O for 5 seconds... 00:14:31.289 00:14:31.289 Latency(us) 00:14:31.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.289 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x0 length 0x20000 00:14:31.289 nvme0n1 : 5.07 2507.69 9.80 0.00 0.00 50832.48 5421.61 63391.19 00:14:31.289 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x20000 length 0x20000 00:14:31.289 nvme0n1 : 5.07 2588.51 10.11 0.00 0.00 49144.83 6374.87 61961.31 00:14:31.289 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x0 length 0x80000 00:14:31.289 nvme1n1 : 5.08 2612.66 10.21 0.00 0.00 48678.73 15371.17 71493.82 00:14:31.289 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x80000 length 0x80000 00:14:31.289 nvme1n1 : 5.08 2513.44 9.82 0.00 0.00 50495.52 4766.25 67680.81 00:14:31.289 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x0 length 0x80000 00:14:31.289 nvme1n2 : 5.07 2616.96 10.22 0.00 0.00 48645.81 14179.61 60293.12 00:14:31.289 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x80000 length 0x80000 00:14:31.289 nvme1n2 : 5.08 2545.38 9.94 0.00 0.00 49913.59 3247.01 65774.31 00:14:31.289 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x0 length 0x80000 00:14:31.289 nvme1n3 : 5.08 2571.19 10.04 0.00 0.00 49403.99 15192.44 66250.94 00:14:31.289 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x80000 length 0x80000 00:14:31.289 nvme1n3 : 5.09 2526.87 9.87 0.00 0.00 50212.33 2919.33 68157.44 00:14:31.289 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x0 length 0xbd0bd 00:14:31.289 nvme2n1 : 5.07 2953.32 11.54 0.00 0.00 43043.51 5093.93 63867.81 00:14:31.289 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:31.289 
nvme2n1 : 5.09 2854.04 11.15 0.00 0.00 44407.29 4944.99 62914.56 00:14:31.289 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0x0 length 0xa0000 00:14:31.289 nvme3n1 : 5.08 2539.60 9.92 0.00 0.00 49926.55 3142.75 65774.31 00:14:31.289 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:31.289 Verification LBA range: start 0xa0000 length 0xa0000 00:14:31.289 nvme3n1 : 5.09 2456.59 9.60 0.00 0.00 51523.73 11915.64 67680.81 00:14:31.289 =================================================================================================================== 00:14:31.289 Total : 31286.25 122.21 0.00 0.00 48720.64 2919.33 71493.82 00:14:32.226 00:14:32.226 real 0m7.113s 00:14:32.226 user 0m9.333s 00:14:32.226 sys 0m3.194s 00:14:32.226 04:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.226 04:54:39 -- common/autotest_common.sh@10 -- # set +x 00:14:32.226 ************************************ 00:14:32.226 END TEST bdev_verify 00:14:32.226 ************************************ 00:14:32.226 04:54:39 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:32.226 04:54:39 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:14:32.226 04:54:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:32.226 04:54:39 -- common/autotest_common.sh@10 -- # set +x 00:14:32.226 ************************************ 00:14:32.226 START TEST bdev_verify_big_io 00:14:32.226 ************************************ 00:14:32.226 04:54:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:32.484 [2024-05-12 04:54:39.357469] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:32.484 [2024-05-12 04:54:39.357642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69784 ] 00:14:32.484 [2024-05-12 04:54:39.529916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:32.743 [2024-05-12 04:54:39.719984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.743 [2024-05-12 04:54:39.720001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.308 Running I/O for 5 seconds... 
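Both verify stages here are plain bdevperf runs against the generated bdev.json. A sketch of the small-block invocation, assuming the repo layout used above; the big-I/O stage now running is the same command with -o 65536:

    # queue depth 128, 4 KiB I/Os, verify workload, 5 s, cores 0-1 (mask 0x3)
    build/examples/bdevperf \
        --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3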
00:14:39.871 00:14:39.871 Latency(us) 00:14:39.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.871 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x0 length 0x2000 00:14:39.871 nvme0n1 : 5.62 247.56 15.47 0.00 0.00 496259.01 53143.74 754974.72 00:14:39.871 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x2000 length 0x2000 00:14:39.871 nvme0n1 : 5.63 247.04 15.44 0.00 0.00 499368.56 59816.49 705405.67 00:14:39.871 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x0 length 0x8000 00:14:39.871 nvme1n1 : 5.53 251.60 15.72 0.00 0.00 482071.95 56480.12 686340.65 00:14:39.871 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x8000 length 0x8000 00:14:39.871 nvme1n1 : 5.75 225.56 14.10 0.00 0.00 540926.53 67204.19 934185.89 00:14:39.871 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x0 length 0x8000 00:14:39.871 nvme1n2 : 5.68 245.00 15.31 0.00 0.00 485063.28 57195.05 652023.62 00:14:39.871 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x8000 length 0x8000 00:14:39.871 nvme1n2 : 5.73 242.55 15.16 0.00 0.00 491108.73 64821.06 663462.63 00:14:39.871 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x0 length 0x8000 00:14:39.871 nvme1n3 : 5.74 257.71 16.11 0.00 0.00 450897.76 59339.87 575763.55 00:14:39.871 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x8000 length 0x8000 00:14:39.871 nvme1n3 : 5.75 256.94 16.06 0.00 0.00 450092.61 64344.44 413710.89 00:14:39.871 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x0 length 0xbd0b 00:14:39.871 nvme2n1 : 5.77 271.49 16.97 0.00 0.00 419183.57 31218.97 530007.51 00:14:39.871 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:39.871 nvme2n1 : 5.76 263.36 16.46 0.00 0.00 432496.72 4885.41 402271.88 00:14:39.871 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0x0 length 0xa000 00:14:39.871 nvme3n1 : 5.78 302.59 18.91 0.00 0.00 368697.48 2278.87 644397.61 00:14:39.871 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:39.871 Verification LBA range: start 0xa000 length 0xa000 00:14:39.871 nvme3n1 : 5.76 256.72 16.05 0.00 0.00 437900.95 8757.99 552885.53 00:14:39.871 =================================================================================================================== 00:14:39.871 Total : 3068.13 191.76 0.00 0.00 459598.11 2278.87 934185.89 00:14:40.438 00:14:40.438 real 0m8.091s 00:14:40.438 user 0m14.389s 00:14:40.438 sys 0m0.658s 00:14:40.438 04:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.438 04:54:47 -- common/autotest_common.sh@10 -- # set +x 00:14:40.438 ************************************ 00:14:40.438 END TEST bdev_verify_big_io 00:14:40.438 ************************************ 00:14:40.438 04:54:47 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:40.438 04:54:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:40.438 04:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.438 04:54:47 -- common/autotest_common.sh@10 -- # set +x 00:14:40.438 ************************************ 00:14:40.438 START TEST bdev_write_zeroes 00:14:40.438 ************************************ 00:14:40.438 04:54:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:40.438 [2024-05-12 04:54:47.493464] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:40.438 [2024-05-12 04:54:47.493633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69894 ] 00:14:40.697 [2024-05-12 04:54:47.663868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.956 [2024-05-12 04:54:47.840686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.215 Running I/O for 1 seconds... 00:14:42.152 00:14:42.152 Latency(us) 00:14:42.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.152 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:42.152 nvme0n1 : 1.01 11269.82 44.02 0.00 0.00 11347.22 7060.01 19422.49 00:14:42.152 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:42.152 nvme1n1 : 1.01 11256.52 43.97 0.00 0.00 11351.23 7089.80 20137.43 00:14:42.152 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:42.152 nvme1n2 : 1.01 11244.36 43.92 0.00 0.00 11355.67 7030.23 20375.74 00:14:42.152 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:42.152 nvme1n3 : 1.01 11313.06 44.19 0.00 0.00 11275.76 6762.12 20137.43 00:14:42.152 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:42.152 nvme2n1 : 1.01 16437.71 64.21 0.00 0.00 7744.54 3127.85 14477.50 00:14:42.152 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:42.152 nvme3n1 : 1.02 11219.32 43.83 0.00 0.00 11307.31 7000.44 20018.27 00:14:42.152 =================================================================================================================== 00:14:42.152 Total : 72740.80 284.14 0.00 0.00 10516.18 3127.85 20375.74 00:14:43.527 00:14:43.527 real 0m2.880s 00:14:43.527 user 0m2.161s 00:14:43.527 sys 0m0.528s 00:14:43.527 04:54:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.527 04:54:50 -- common/autotest_common.sh@10 -- # set +x 00:14:43.527 ************************************ 00:14:43.527 END TEST bdev_write_zeroes 00:14:43.527 ************************************ 00:14:43.527 04:54:50 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:43.527 04:54:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:43.527 04:54:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:43.527 04:54:50 -- common/autotest_common.sh@10 
-- # set +x 00:14:43.527 ************************************ 00:14:43.527 START TEST bdev_json_nonenclosed 00:14:43.527 ************************************ 00:14:43.527 04:54:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:43.527 [2024-05-12 04:54:50.420258] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:43.527 [2024-05-12 04:54:50.420466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69941 ] 00:14:43.527 [2024-05-12 04:54:50.592601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.786 [2024-05-12 04:54:50.762977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.786 [2024-05-12 04:54:50.763191] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:43.786 [2024-05-12 04:54:50.763220] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:44.044 00:14:44.044 real 0m0.787s 00:14:44.044 user 0m0.554s 00:14:44.044 sys 0m0.128s 00:14:44.044 04:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.044 ************************************ 00:14:44.044 END TEST bdev_json_nonenclosed 00:14:44.044 ************************************ 00:14:44.044 04:54:51 -- common/autotest_common.sh@10 -- # set +x 00:14:44.044 04:54:51 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:44.044 04:54:51 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:44.044 04:54:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.044 04:54:51 -- common/autotest_common.sh@10 -- # set +x 00:14:44.044 ************************************ 00:14:44.044 START TEST bdev_json_nonarray 00:14:44.044 ************************************ 00:14:44.044 04:54:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:44.302 [2024-05-12 04:54:51.239577] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:44.302 [2024-05-12 04:54:51.239744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69971 ] 00:14:44.302 [2024-05-12 04:54:51.401110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.560 [2024-05-12 04:54:51.577270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.560 [2024-05-12 04:54:51.577510] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
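This pair of JSON negative tests hands bdevperf deliberately malformed configs and expects the clean non-zero shutdown seen in the spdk_app_stop messages, not a crash. Illustrative inputs only, hypothetical here; the repo's nonenclosed.json and nonarray.json are the authoritative fixtures:

    # nonenclosed.json: valid keys, but the enclosing top-level {} is missing
    "subsystems": []

    # nonarray.json: 'subsystems' present, but an object rather than an array
    { "subsystems": { "subsystem": "bdev", "config": [] } }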
00:14:44.560 [2024-05-12 04:54:51.577539] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:44.818 00:14:44.818 real 0m0.777s 00:14:44.818 user 0m0.560s 00:14:44.818 sys 0m0.113s 00:14:44.818 04:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.818 04:54:51 -- common/autotest_common.sh@10 -- # set +x 00:14:44.818 ************************************ 00:14:44.818 END TEST bdev_json_nonarray 00:14:44.818 ************************************ 00:14:45.077 04:54:51 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:14:45.077 04:54:51 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:14:45.077 04:54:51 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:14:45.077 04:54:51 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:45.077 04:54:51 -- bdev/blockdev.sh@809 -- # cleanup 00:14:45.077 04:54:51 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:45.077 04:54:51 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:45.077 04:54:51 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:14:45.077 04:54:51 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:14:45.077 04:54:51 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:14:45.077 04:54:51 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:14:45.077 04:54:51 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:46.010 lsblk: /dev/nvme0c0n1: not a block device 00:14:46.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:00.895 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.895 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.895 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.895 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.895 00:15:00.895 real 1m14.147s 00:15:00.895 user 1m43.955s 00:15:00.895 sys 0m50.445s 00:15:00.895 04:55:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.895 ************************************ 00:15:00.895 END TEST blockdev_xnvme 00:15:00.895 ************************************ 00:15:00.895 04:55:06 -- common/autotest_common.sh@10 -- # set +x 00:15:00.895 04:55:06 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:00.895 04:55:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:00.895 04:55:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.895 04:55:06 -- common/autotest_common.sh@10 -- # set +x 00:15:00.895 ************************************ 00:15:00.895 START TEST ublk 00:15:00.895 ************************************ 00:15:00.895 04:55:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:00.895 * Looking for test storage... 
00:15:00.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:00.895 04:55:06 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:00.895 04:55:06 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:00.895 04:55:06 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:00.895 04:55:06 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:00.895 04:55:06 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:00.895 04:55:06 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:00.895 04:55:06 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:00.895 04:55:06 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:00.895 04:55:06 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:00.895 04:55:06 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:00.895 04:55:06 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:00.895 04:55:06 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:00.895 04:55:06 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:00.895 04:55:06 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:00.895 04:55:06 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:00.895 04:55:06 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:00.895 04:55:06 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:00.895 04:55:06 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:00.895 04:55:06 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:00.895 04:55:06 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:00.895 04:55:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:00.895 04:55:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.895 04:55:06 -- common/autotest_common.sh@10 -- # set +x 00:15:00.895 ************************************ 00:15:00.895 START TEST test_save_ublk_config 00:15:00.895 ************************************ 00:15:00.895 04:55:06 -- common/autotest_common.sh@1104 -- # test_save_config 00:15:00.895 04:55:06 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:00.895 04:55:06 -- ublk/ublk.sh@103 -- # tgtpid=70337 00:15:00.895 04:55:06 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:00.895 04:55:06 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:00.895 04:55:06 -- ublk/ublk.sh@106 -- # waitforlisten 70337 00:15:00.895 04:55:06 -- common/autotest_common.sh@819 -- # '[' -z 70337 ']' 00:15:00.895 04:55:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.895 04:55:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.895 04:55:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.895 04:55:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.895 04:55:06 -- common/autotest_common.sh@10 -- # set +x 00:15:00.895 [2024-05-12 04:55:07.015704] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
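test_save_config exercises a freshly launched target; the launch-and-wait pattern seen above (spdk_tgt -L ublk, then waitforlisten on the pid) is, in outline, with error handling trimmed:

    # start the target with ublk debug logging and wait for its RPC socket
    build/bin/spdk_tgt -L ublk &
    tgtpid=$!
    waitforlisten $tgtpid    # autotest_common.sh helper; polls /var/tmp/spdk.sock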
00:15:00.895 [2024-05-12 04:55:07.015898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70337 ] 00:15:00.895 [2024-05-12 04:55:07.187811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.895 [2024-05-12 04:55:07.424724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:00.895 [2024-05-12 04:55:07.425000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.843 04:55:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.843 04:55:08 -- common/autotest_common.sh@852 -- # return 0 00:15:01.843 04:55:08 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:01.843 04:55:08 -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:01.843 04:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.843 04:55:08 -- common/autotest_common.sh@10 -- # set +x 00:15:01.843 [2024-05-12 04:55:08.640428] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:01.843 malloc0 00:15:01.843 [2024-05-12 04:55:08.712419] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:01.843 [2024-05-12 04:55:08.712545] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:01.843 [2024-05-12 04:55:08.712564] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:01.843 [2024-05-12 04:55:08.712575] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:01.843 [2024-05-12 04:55:08.720328] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:01.843 [2024-05-12 04:55:08.720368] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:01.843 [2024-05-12 04:55:08.728322] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:01.843 [2024-05-12 04:55:08.728457] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:01.843 [2024-05-12 04:55:08.752315] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:01.843 0 00:15:01.843 04:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.843 04:55:08 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:01.843 04:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.843 04:55:08 -- common/autotest_common.sh@10 -- # set +x 00:15:02.101 04:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.101 04:55:08 -- ublk/ublk.sh@115 -- # config='{ 00:15:02.101 "subsystems": [ 00:15:02.101 { 00:15:02.101 "subsystem": "iobuf", 00:15:02.101 "config": [ 00:15:02.101 { 00:15:02.101 "method": "iobuf_set_options", 00:15:02.101 "params": { 00:15:02.101 "small_pool_count": 8192, 00:15:02.101 "large_pool_count": 1024, 00:15:02.101 "small_bufsize": 8192, 00:15:02.101 "large_bufsize": 135168 00:15:02.101 } 00:15:02.101 } 00:15:02.101 ] 00:15:02.101 }, 00:15:02.101 { 00:15:02.101 "subsystem": "sock", 00:15:02.101 "config": [ 00:15:02.101 { 00:15:02.101 "method": "sock_impl_set_options", 00:15:02.101 "params": { 00:15:02.101 "impl_name": "posix", 00:15:02.101 "recv_buf_size": 2097152, 00:15:02.101 "send_buf_size": 2097152, 00:15:02.101 "enable_recv_pipe": true, 00:15:02.101 "enable_quickack": false, 00:15:02.101 "enable_placement_id": 0, 00:15:02.102 
"enable_zerocopy_send_server": true, 00:15:02.102 "enable_zerocopy_send_client": false, 00:15:02.102 "zerocopy_threshold": 0, 00:15:02.102 "tls_version": 0, 00:15:02.102 "enable_ktls": false 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "sock_impl_set_options", 00:15:02.102 "params": { 00:15:02.102 "impl_name": "ssl", 00:15:02.102 "recv_buf_size": 4096, 00:15:02.102 "send_buf_size": 4096, 00:15:02.102 "enable_recv_pipe": true, 00:15:02.102 "enable_quickack": false, 00:15:02.102 "enable_placement_id": 0, 00:15:02.102 "enable_zerocopy_send_server": true, 00:15:02.102 "enable_zerocopy_send_client": false, 00:15:02.102 "zerocopy_threshold": 0, 00:15:02.102 "tls_version": 0, 00:15:02.102 "enable_ktls": false 00:15:02.102 } 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "vmd", 00:15:02.102 "config": [] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "accel", 00:15:02.102 "config": [ 00:15:02.102 { 00:15:02.102 "method": "accel_set_options", 00:15:02.102 "params": { 00:15:02.102 "small_cache_size": 128, 00:15:02.102 "large_cache_size": 16, 00:15:02.102 "task_count": 2048, 00:15:02.102 "sequence_count": 2048, 00:15:02.102 "buf_count": 2048 00:15:02.102 } 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "bdev", 00:15:02.102 "config": [ 00:15:02.102 { 00:15:02.102 "method": "bdev_set_options", 00:15:02.102 "params": { 00:15:02.102 "bdev_io_pool_size": 65535, 00:15:02.102 "bdev_io_cache_size": 256, 00:15:02.102 "bdev_auto_examine": true, 00:15:02.102 "iobuf_small_cache_size": 128, 00:15:02.102 "iobuf_large_cache_size": 16 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "bdev_raid_set_options", 00:15:02.102 "params": { 00:15:02.102 "process_window_size_kb": 1024 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "bdev_iscsi_set_options", 00:15:02.102 "params": { 00:15:02.102 "timeout_sec": 30 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "bdev_nvme_set_options", 00:15:02.102 "params": { 00:15:02.102 "action_on_timeout": "none", 00:15:02.102 "timeout_us": 0, 00:15:02.102 "timeout_admin_us": 0, 00:15:02.102 "keep_alive_timeout_ms": 10000, 00:15:02.102 "transport_retry_count": 4, 00:15:02.102 "arbitration_burst": 0, 00:15:02.102 "low_priority_weight": 0, 00:15:02.102 "medium_priority_weight": 0, 00:15:02.102 "high_priority_weight": 0, 00:15:02.102 "nvme_adminq_poll_period_us": 10000, 00:15:02.102 "nvme_ioq_poll_period_us": 0, 00:15:02.102 "io_queue_requests": 0, 00:15:02.102 "delay_cmd_submit": true, 00:15:02.102 "bdev_retry_count": 3, 00:15:02.102 "transport_ack_timeout": 0, 00:15:02.102 "ctrlr_loss_timeout_sec": 0, 00:15:02.102 "reconnect_delay_sec": 0, 00:15:02.102 "fast_io_fail_timeout_sec": 0, 00:15:02.102 "generate_uuids": false, 00:15:02.102 "transport_tos": 0, 00:15:02.102 "io_path_stat": false, 00:15:02.102 "allow_accel_sequence": false 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "bdev_nvme_set_hotplug", 00:15:02.102 "params": { 00:15:02.102 "period_us": 100000, 00:15:02.102 "enable": false 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "bdev_malloc_create", 00:15:02.102 "params": { 00:15:02.102 "name": "malloc0", 00:15:02.102 "num_blocks": 8192, 00:15:02.102 "block_size": 4096, 00:15:02.102 "physical_block_size": 4096, 00:15:02.102 "uuid": "4c684f83-04e0-4a7b-9c3e-c9cb75b4b88f", 00:15:02.102 "optimal_io_boundary": 0 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 
"method": "bdev_wait_for_examine" 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "scsi", 00:15:02.102 "config": null 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "scheduler", 00:15:02.102 "config": [ 00:15:02.102 { 00:15:02.102 "method": "framework_set_scheduler", 00:15:02.102 "params": { 00:15:02.102 "name": "static" 00:15:02.102 } 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "vhost_scsi", 00:15:02.102 "config": [] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "vhost_blk", 00:15:02.102 "config": [] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "ublk", 00:15:02.102 "config": [ 00:15:02.102 { 00:15:02.102 "method": "ublk_create_target", 00:15:02.102 "params": { 00:15:02.102 "cpumask": "1" 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "ublk_start_disk", 00:15:02.102 "params": { 00:15:02.102 "bdev_name": "malloc0", 00:15:02.102 "ublk_id": 0, 00:15:02.102 "num_queues": 1, 00:15:02.102 "queue_depth": 128 00:15:02.102 } 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "nbd", 00:15:02.102 "config": [] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "nvmf", 00:15:02.102 "config": [ 00:15:02.102 { 00:15:02.102 "method": "nvmf_set_config", 00:15:02.102 "params": { 00:15:02.102 "discovery_filter": "match_any", 00:15:02.102 "admin_cmd_passthru": { 00:15:02.102 "identify_ctrlr": false 00:15:02.102 } 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "nvmf_set_max_subsystems", 00:15:02.102 "params": { 00:15:02.102 "max_subsystems": 1024 00:15:02.102 } 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "method": "nvmf_set_crdt", 00:15:02.102 "params": { 00:15:02.102 "crdt1": 0, 00:15:02.102 "crdt2": 0, 00:15:02.102 "crdt3": 0 00:15:02.102 } 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }, 00:15:02.102 { 00:15:02.102 "subsystem": "iscsi", 00:15:02.102 "config": [ 00:15:02.102 { 00:15:02.102 "method": "iscsi_set_options", 00:15:02.102 "params": { 00:15:02.102 "node_base": "iqn.2016-06.io.spdk", 00:15:02.102 "max_sessions": 128, 00:15:02.102 "max_connections_per_session": 2, 00:15:02.102 "max_queue_depth": 64, 00:15:02.102 "default_time2wait": 2, 00:15:02.102 "default_time2retain": 20, 00:15:02.102 "first_burst_length": 8192, 00:15:02.102 "immediate_data": true, 00:15:02.102 "allow_duplicated_isid": false, 00:15:02.102 "error_recovery_level": 0, 00:15:02.102 "nop_timeout": 60, 00:15:02.102 "nop_in_interval": 30, 00:15:02.102 "disable_chap": false, 00:15:02.102 "require_chap": false, 00:15:02.102 "mutual_chap": false, 00:15:02.102 "chap_group": 0, 00:15:02.102 "max_large_datain_per_connection": 64, 00:15:02.102 "max_r2t_per_connection": 4, 00:15:02.102 "pdu_pool_size": 36864, 00:15:02.102 "immediate_data_pool_size": 16384, 00:15:02.102 "data_out_pool_size": 2048 00:15:02.102 } 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 } 00:15:02.102 ] 00:15:02.102 }' 00:15:02.102 04:55:08 -- ublk/ublk.sh@116 -- # killprocess 70337 00:15:02.102 04:55:08 -- common/autotest_common.sh@926 -- # '[' -z 70337 ']' 00:15:02.102 04:55:08 -- common/autotest_common.sh@930 -- # kill -0 70337 00:15:02.102 04:55:08 -- common/autotest_common.sh@931 -- # uname 00:15:02.102 04:55:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.102 04:55:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70337 00:15:02.102 killing process with pid 70337 00:15:02.102 04:55:09 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:15:02.102 04:55:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.102 04:55:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70337' 00:15:02.102 04:55:09 -- common/autotest_common.sh@945 -- # kill 70337 00:15:02.102 04:55:09 -- common/autotest_common.sh@950 -- # wait 70337 00:15:04.016 [2024-05-12 04:55:10.644890] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:04.016 [2024-05-12 04:55:10.685322] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:04.016 [2024-05-12 04:55:10.685527] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:04.016 [2024-05-12 04:55:10.693260] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:04.016 [2024-05-12 04:55:10.693325] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:04.016 [2024-05-12 04:55:10.693339] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:04.016 [2024-05-12 04:55:10.693379] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:04.016 [2024-05-12 04:55:10.693571] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:04.953 04:55:11 -- ublk/ublk.sh@119 -- # tgtpid=70406 00:15:04.953 04:55:11 -- ublk/ublk.sh@121 -- # waitforlisten 70406 00:15:04.953 04:55:11 -- common/autotest_common.sh@819 -- # '[' -z 70406 ']' 00:15:04.953 04:55:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.953 04:55:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:04.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.953 04:55:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
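The -c /dev/fd/63 in the spdk_tgt banner above is consistent with bash process substitution: the JSON captured by save_config in the first phase is replayed straight into the restarted target. The whole round trip, in outline, with names as in this run:

    # phase 1: build the device, snapshot the runtime config, stop the target
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 blocks x 4 KiB
    scripts/rpc.py ublk_create_target
    scripts/rpc.py ublk_start_disk malloc0 0
    config=$(scripts/rpc.py save_config)
    # ... killprocess $tgtpid ...
    # phase 2: restart from the snapshot; <(...) appears to the target as /dev/fd/63
    build/bin/spdk_tgt -L ublk -c <(echo "$config") &
    # then verify the device came back, exactly as the test does below
    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'    # expect /dev/ublkb0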
00:15:04.953 04:55:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:04.953 04:55:11 -- common/autotest_common.sh@10 -- # set +x 00:15:04.953 04:55:11 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:04.953 04:55:11 -- ublk/ublk.sh@118 -- # echo '{ 00:15:04.953 "subsystems": [ 00:15:04.953 { 00:15:04.953 "subsystem": "iobuf", 00:15:04.953 "config": [ 00:15:04.953 { 00:15:04.953 "method": "iobuf_set_options", 00:15:04.953 "params": { 00:15:04.953 "small_pool_count": 8192, 00:15:04.953 "large_pool_count": 1024, 00:15:04.953 "small_bufsize": 8192, 00:15:04.953 "large_bufsize": 135168 00:15:04.953 } 00:15:04.953 } 00:15:04.953 ] 00:15:04.953 }, 00:15:04.953 { 00:15:04.953 "subsystem": "sock", 00:15:04.953 "config": [ 00:15:04.953 { 00:15:04.953 "method": "sock_impl_set_options", 00:15:04.953 "params": { 00:15:04.953 "impl_name": "posix", 00:15:04.953 "recv_buf_size": 2097152, 00:15:04.954 "send_buf_size": 2097152, 00:15:04.954 "enable_recv_pipe": true, 00:15:04.954 "enable_quickack": false, 00:15:04.954 "enable_placement_id": 0, 00:15:04.954 "enable_zerocopy_send_server": true, 00:15:04.954 "enable_zerocopy_send_client": false, 00:15:04.954 "zerocopy_threshold": 0, 00:15:04.954 "tls_version": 0, 00:15:04.954 "enable_ktls": false 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "sock_impl_set_options", 00:15:04.954 "params": { 00:15:04.954 "impl_name": "ssl", 00:15:04.954 "recv_buf_size": 4096, 00:15:04.954 "send_buf_size": 4096, 00:15:04.954 "enable_recv_pipe": true, 00:15:04.954 "enable_quickack": false, 00:15:04.954 "enable_placement_id": 0, 00:15:04.954 "enable_zerocopy_send_server": true, 00:15:04.954 "enable_zerocopy_send_client": false, 00:15:04.954 "zerocopy_threshold": 0, 00:15:04.954 "tls_version": 0, 00:15:04.954 "enable_ktls": false 00:15:04.954 } 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "vmd", 00:15:04.954 "config": [] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "accel", 00:15:04.954 "config": [ 00:15:04.954 { 00:15:04.954 "method": "accel_set_options", 00:15:04.954 "params": { 00:15:04.954 "small_cache_size": 128, 00:15:04.954 "large_cache_size": 16, 00:15:04.954 "task_count": 2048, 00:15:04.954 "sequence_count": 2048, 00:15:04.954 "buf_count": 2048 00:15:04.954 } 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "bdev", 00:15:04.954 "config": [ 00:15:04.954 { 00:15:04.954 "method": "bdev_set_options", 00:15:04.954 "params": { 00:15:04.954 "bdev_io_pool_size": 65535, 00:15:04.954 "bdev_io_cache_size": 256, 00:15:04.954 "bdev_auto_examine": true, 00:15:04.954 "iobuf_small_cache_size": 128, 00:15:04.954 "iobuf_large_cache_size": 16 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "bdev_raid_set_options", 00:15:04.954 "params": { 00:15:04.954 "process_window_size_kb": 1024 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "bdev_iscsi_set_options", 00:15:04.954 "params": { 00:15:04.954 "timeout_sec": 30 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "bdev_nvme_set_options", 00:15:04.954 "params": { 00:15:04.954 "action_on_timeout": "none", 00:15:04.954 "timeout_us": 0, 00:15:04.954 "timeout_admin_us": 0, 00:15:04.954 "keep_alive_timeout_ms": 10000, 00:15:04.954 "transport_retry_count": 4, 00:15:04.954 "arbitration_burst": 0, 00:15:04.954 "low_priority_weight": 0, 00:15:04.954 "medium_priority_weight": 0, 00:15:04.954 "high_priority_weight": 0, 
00:15:04.954 "nvme_adminq_poll_period_us": 10000, 00:15:04.954 "nvme_ioq_poll_period_us": 0, 00:15:04.954 "io_queue_requests": 0, 00:15:04.954 "delay_cmd_submit": true, 00:15:04.954 "bdev_retry_count": 3, 00:15:04.954 "transport_ack_timeout": 0, 00:15:04.954 "ctrlr_loss_timeout_sec": 0, 00:15:04.954 "reconnect_delay_sec": 0, 00:15:04.954 "fast_io_fail_timeout_sec": 0, 00:15:04.954 "generate_uuids": false, 00:15:04.954 "transport_tos": 0, 00:15:04.954 "io_path_stat": false, 00:15:04.954 "allow_accel_sequence": false 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "bdev_nvme_set_hotplug", 00:15:04.954 "params": { 00:15:04.954 "period_us": 100000, 00:15:04.954 "enable": false 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "bdev_malloc_create", 00:15:04.954 "params": { 00:15:04.954 "name": "malloc0", 00:15:04.954 "num_blocks": 8192, 00:15:04.954 "block_size": 4096, 00:15:04.954 "physical_block_size": 4096, 00:15:04.954 "uuid": "4c684f83-04e0-4a7b-9c3e-c9cb75b4b88f", 00:15:04.954 "optimal_io_boundary": 0 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "bdev_wait_for_examine" 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "scsi", 00:15:04.954 "config": null 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "scheduler", 00:15:04.954 "config": [ 00:15:04.954 { 00:15:04.954 "method": "framework_set_scheduler", 00:15:04.954 "params": { 00:15:04.954 "name": "static" 00:15:04.954 } 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "vhost_scsi", 00:15:04.954 "config": [] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "vhost_blk", 00:15:04.954 "config": [] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "ublk", 00:15:04.954 "config": [ 00:15:04.954 { 00:15:04.954 "method": "ublk_create_target", 00:15:04.954 "params": { 00:15:04.954 "cpumask": "1" 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "ublk_start_disk", 00:15:04.954 "params": { 00:15:04.954 "bdev_name": "malloc0", 00:15:04.954 "ublk_id": 0, 00:15:04.954 "num_queues": 1, 00:15:04.954 "queue_depth": 128 00:15:04.954 } 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "nbd", 00:15:04.954 "config": [] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "nvmf", 00:15:04.954 "config": [ 00:15:04.954 { 00:15:04.954 "method": "nvmf_set_config", 00:15:04.954 "params": { 00:15:04.954 "discovery_filter": "match_any", 00:15:04.954 "admin_cmd_passthru": { 00:15:04.954 "identify_ctrlr": false 00:15:04.954 } 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "nvmf_set_max_subsystems", 00:15:04.954 "params": { 00:15:04.954 "max_subsystems": 1024 00:15:04.954 } 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "method": "nvmf_set_crdt", 00:15:04.954 "params": { 00:15:04.954 "crdt1": 0, 00:15:04.954 "crdt2": 0, 00:15:04.954 "crdt3": 0 00:15:04.954 } 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }, 00:15:04.954 { 00:15:04.954 "subsystem": "iscsi", 00:15:04.954 "config": [ 00:15:04.954 { 00:15:04.954 "method": "iscsi_set_options", 00:15:04.954 "params": { 00:15:04.954 "node_base": "iqn.2016-06.io.spdk", 00:15:04.954 "max_sessions": 128, 00:15:04.954 "max_connections_per_session": 2, 00:15:04.954 "max_queue_depth": 64, 00:15:04.954 "default_time2wait": 2, 00:15:04.954 "default_time2retain": 20, 00:15:04.954 "first_burst_length": 8192, 00:15:04.954 "immediate_data": true, 00:15:04.954 "allow_duplicated_isid": false, 00:15:04.954 
"error_recovery_level": 0, 00:15:04.954 "nop_timeout": 60, 00:15:04.954 "nop_in_interval": 30, 00:15:04.954 "disable_chap": false, 00:15:04.954 "require_chap": false, 00:15:04.954 "mutual_chap": false, 00:15:04.954 "chap_group": 0, 00:15:04.954 "max_large_datain_per_connection": 64, 00:15:04.954 "max_r2t_per_connection": 4, 00:15:04.954 "pdu_pool_size": 36864, 00:15:04.954 "immediate_data_pool_size": 16384, 00:15:04.954 "data_out_pool_size": 2048 00:15:04.954 } 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 } 00:15:04.954 ] 00:15:04.954 }' 00:15:04.954 [2024-05-12 04:55:11.909765] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:04.954 [2024-05-12 04:55:11.909925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70406 ] 00:15:05.213 [2024-05-12 04:55:12.080455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.213 [2024-05-12 04:55:12.272028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.213 [2024-05-12 04:55:12.272292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.147 [2024-05-12 04:55:13.087140] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:06.147 [2024-05-12 04:55:13.094422] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:06.147 [2024-05-12 04:55:13.094527] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:06.147 [2024-05-12 04:55:13.094542] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:06.147 [2024-05-12 04:55:13.094550] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:06.147 [2024-05-12 04:55:13.103324] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:06.147 [2024-05-12 04:55:13.103350] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:06.147 [2024-05-12 04:55:13.110339] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:06.147 [2024-05-12 04:55:13.110448] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:06.147 [2024-05-12 04:55:13.127311] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:06.713 04:55:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:06.713 04:55:13 -- common/autotest_common.sh@852 -- # return 0 00:15:06.713 04:55:13 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:06.713 04:55:13 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:06.713 04:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.713 04:55:13 -- common/autotest_common.sh@10 -- # set +x 00:15:06.713 04:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.714 04:55:13 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:06.714 04:55:13 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:06.714 04:55:13 -- ublk/ublk.sh@125 -- # killprocess 70406 00:15:06.714 04:55:13 -- common/autotest_common.sh@926 -- # '[' -z 70406 ']' 00:15:06.714 04:55:13 -- common/autotest_common.sh@930 -- # kill -0 70406 00:15:06.714 04:55:13 -- common/autotest_common.sh@931 -- # uname 00:15:06.714 04:55:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:15:06.714 04:55:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70406 00:15:06.714 killing process with pid 70406 00:15:06.714 04:55:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.714 04:55:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.714 04:55:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70406' 00:15:06.714 04:55:13 -- common/autotest_common.sh@945 -- # kill 70406 00:15:06.714 04:55:13 -- common/autotest_common.sh@950 -- # wait 70406 00:15:08.088 [2024-05-12 04:55:15.139500] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:08.088 [2024-05-12 04:55:15.172303] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:08.088 [2024-05-12 04:55:15.172523] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:08.088 [2024-05-12 04:55:15.180255] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:08.088 [2024-05-12 04:55:15.180309] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:08.088 [2024-05-12 04:55:15.180321] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:08.088 [2024-05-12 04:55:15.180362] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:08.088 [2024-05-12 04:55:15.180564] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:09.465 04:55:16 -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:09.465 00:15:09.465 real 0m9.403s 00:15:09.465 user 0m7.885s 00:15:09.465 sys 0m2.809s 00:15:09.465 04:55:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.465 ************************************ 00:15:09.465 END TEST test_save_ublk_config 00:15:09.465 ************************************ 00:15:09.465 04:55:16 -- common/autotest_common.sh@10 -- # set +x 00:15:09.465 04:55:16 -- ublk/ublk.sh@139 -- # spdk_pid=70486 00:15:09.466 04:55:16 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:09.466 04:55:16 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:09.466 04:55:16 -- ublk/ublk.sh@141 -- # waitforlisten 70486 00:15:09.466 04:55:16 -- common/autotest_common.sh@819 -- # '[' -z 70486 ']' 00:15:09.466 04:55:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.466 04:55:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:09.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.466 04:55:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.466 04:55:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:09.466 04:55:16 -- common/autotest_common.sh@10 -- # set +x 00:15:09.466 [2024-05-12 04:55:16.439175] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
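The test_save_ublk_config run above boots spdk_tgt with a JSON config piped over /dev/fd/63 and then verifies the resulting /dev/ublkb0. A minimal sketch of the same save-and-replay pattern done by hand, assuming a target already listening on the default /var/tmp/spdk.sock (save_config is the standard SPDK RPC for dumping the live subsystem state; the file name here is illustrative, not taken from this run):

    # Dump the running configuration, including the ublk subsystem section:
    ./scripts/rpc.py save_config > ublk_config.json
    # Replay it at the next start, as ublk.sh@118 does via process substitution:
    ./build/bin/spdk_tgt -L ublk -c ublk_config.json
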
00:15:09.466 [2024-05-12 04:55:16.439341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70486 ] 00:15:09.725 [2024-05-12 04:55:16.599267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:09.725 [2024-05-12 04:55:16.773560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.725 [2024-05-12 04:55:16.773947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.725 [2024-05-12 04:55:16.773996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.102 04:55:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.102 04:55:18 -- common/autotest_common.sh@852 -- # return 0 00:15:11.102 04:55:18 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:11.102 04:55:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:11.102 04:55:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:11.102 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:15:11.102 ************************************ 00:15:11.102 START TEST test_create_ublk 00:15:11.102 ************************************ 00:15:11.102 04:55:18 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:15:11.102 04:55:18 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:11.102 04:55:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.102 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:15:11.102 [2024-05-12 04:55:18.176626] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:11.102 04:55:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.102 04:55:18 -- ublk/ublk.sh@33 -- # ublk_target= 00:15:11.102 04:55:18 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:11.102 04:55:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.102 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:15:11.361 04:55:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.361 04:55:18 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:11.361 04:55:18 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:11.361 04:55:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.361 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:15:11.361 [2024-05-12 04:55:18.406466] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:11.361 [2024-05-12 04:55:18.407039] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:11.361 [2024-05-12 04:55:18.407074] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:11.361 [2024-05-12 04:55:18.407089] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:11.361 [2024-05-12 04:55:18.418592] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:11.361 [2024-05-12 04:55:18.418627] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:11.361 [2024-05-12 04:55:18.425288] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:11.361 [2024-05-12 04:55:18.444519] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:11.361 [2024-05-12 04:55:18.459486] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:15:11.361 04:55:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.361 04:55:18 -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:11.361 04:55:18 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:11.361 04:55:18 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:11.361 04:55:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.361 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:15:11.361 04:55:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.361 04:55:18 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:11.361 { 00:15:11.361 "ublk_device": "/dev/ublkb0", 00:15:11.361 "id": 0, 00:15:11.361 "queue_depth": 512, 00:15:11.361 "num_queues": 4, 00:15:11.361 "bdev_name": "Malloc0" 00:15:11.361 } 00:15:11.361 ]' 00:15:11.361 04:55:18 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:11.620 04:55:18 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:11.620 04:55:18 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:11.620 04:55:18 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:11.620 04:55:18 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:11.620 04:55:18 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:11.620 04:55:18 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:11.620 04:55:18 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:11.620 04:55:18 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:11.620 04:55:18 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:11.620 04:55:18 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:11.620 04:55:18 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:11.620 04:55:18 -- lvol/common.sh@41 -- # local offset=0 00:15:11.620 04:55:18 -- lvol/common.sh@42 -- # local size=134217728 00:15:11.620 04:55:18 -- lvol/common.sh@43 -- # local rw=write 00:15:11.620 04:55:18 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:11.620 04:55:18 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:11.620 04:55:18 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:11.620 04:55:18 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:11.620 04:55:18 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:11.620 04:55:18 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:11.620 04:55:18 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:11.879 fio: verification read phase will never start because write phase uses all of runtime 00:15:11.879 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:11.879 fio-3.35 00:15:11.879 Starting 1 process 00:15:21.907 00:15:21.907 fio_test: (groupid=0, jobs=1): err= 0: pid=70547: Sun May 12 04:55:28 2024 00:15:21.907 write: IOPS=11.2k, BW=43.6MiB/s (45.8MB/s)(436MiB/10001msec); 0 zone resets 00:15:21.907 clat (usec): min=62, max=10965, avg=88.14, stdev=159.52 00:15:21.907 lat (usec): min=63, max=10982, avg=88.87, stdev=159.54 00:15:21.907 clat percentiles (usec): 00:15:21.907 | 1.00th=[ 68], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 71], 00:15:21.907 | 
30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 77], 00:15:21.907 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 108], 00:15:21.907 | 99.00th=[ 131], 99.50th=[ 149], 99.90th=[ 3228], 99.95th=[ 3752], 00:15:21.907 | 99.99th=[ 4080] 00:15:21.907 bw ( KiB/s): min=18864, max=47584, per=99.77%, avg=44586.53, stdev=6350.56, samples=19 00:15:21.907 iops : min= 4716, max=11896, avg=11146.63, stdev=1587.64, samples=19 00:15:21.907 lat (usec) : 100=90.91%, 250=8.69%, 500=0.01%, 750=0.02%, 1000=0.03% 00:15:21.907 lat (msec) : 2=0.12%, 4=0.21%, 10=0.02%, 20=0.01% 00:15:21.907 cpu : usr=3.09%, sys=8.12%, ctx=111734, majf=0, minf=796 00:15:21.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:21.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.907 issued rwts: total=0,111731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:21.907 00:15:21.907 Run status group 0 (all jobs): 00:15:21.907 WRITE: bw=43.6MiB/s (45.8MB/s), 43.6MiB/s-43.6MiB/s (45.8MB/s-45.8MB/s), io=436MiB (458MB), run=10001-10001msec 00:15:21.907 00:15:21.907 Disk stats (read/write): 00:15:21.907 ublkb0: ios=0/110554, merge=0/0, ticks=0/8851, in_queue=8851, util=99.10% 00:15:21.907 04:55:28 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:21.907 04:55:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.907 04:55:28 -- common/autotest_common.sh@10 -- # set +x 00:15:21.907 [2024-05-12 04:55:28.979198] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:21.907 [2024-05-12 04:55:29.025786] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:21.907 [2024-05-12 04:55:29.030564] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:22.166 [2024-05-12 04:55:29.037344] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:22.166 [2024-05-12 04:55:29.037733] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:22.166 [2024-05-12 04:55:29.037749] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:22.166 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.166 04:55:29 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:22.166 04:55:29 -- common/autotest_common.sh@640 -- # local es=0 00:15:22.166 04:55:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:22.166 04:55:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:22.166 04:55:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:22.166 04:55:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:22.166 04:55:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:22.166 04:55:29 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:15:22.166 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.166 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.166 [2024-05-12 04:55:29.045523] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:22.166 request: 00:15:22.166 { 00:15:22.166 "ublk_id": 0, 00:15:22.166 "method": "ublk_stop_disk", 00:15:22.166 "req_id": 1 00:15:22.166 } 00:15:22.166 Got JSON-RPC error response 00:15:22.166 response: 00:15:22.166 { 00:15:22.166 "code": -19, 00:15:22.166 "message": "No such 
device" 00:15:22.166 } 00:15:22.166 04:55:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:22.166 04:55:29 -- common/autotest_common.sh@643 -- # es=1 00:15:22.166 04:55:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:22.166 04:55:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:22.166 04:55:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:22.166 04:55:29 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:22.166 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.166 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.166 [2024-05-12 04:55:29.061358] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:22.166 [2024-05-12 04:55:29.068329] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:22.166 [2024-05-12 04:55:29.068409] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:22.166 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.166 04:55:29 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:22.166 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.166 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.425 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.425 04:55:29 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:22.425 04:55:29 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:22.425 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.425 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.425 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.425 04:55:29 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:22.425 04:55:29 -- lvol/common.sh@26 -- # jq length 00:15:22.425 04:55:29 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:22.425 04:55:29 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:22.425 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.425 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.425 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.425 04:55:29 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:22.425 04:55:29 -- lvol/common.sh@28 -- # jq length 00:15:22.425 ************************************ 00:15:22.425 END TEST test_create_ublk 00:15:22.425 ************************************ 00:15:22.425 04:55:29 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:22.425 00:15:22.425 real 0m11.305s 00:15:22.425 user 0m0.744s 00:15:22.425 sys 0m0.908s 00:15:22.425 04:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.425 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.425 04:55:29 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:22.425 04:55:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:22.425 04:55:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:22.425 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.425 ************************************ 00:15:22.425 START TEST test_create_multi_ublk 00:15:22.425 ************************************ 00:15:22.425 04:55:29 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:15:22.425 04:55:29 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:22.425 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.425 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.425 [2024-05-12 04:55:29.531358] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:22.425 
04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.425 04:55:29 -- ublk/ublk.sh@62 -- # ublk_target= 00:15:22.425 04:55:29 -- ublk/ublk.sh@64 -- # seq 0 3 00:15:22.425 04:55:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:22.425 04:55:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:22.425 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.425 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.683 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.683 04:55:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:22.683 04:55:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:22.683 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.683 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.683 [2024-05-12 04:55:29.743507] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:22.683 [2024-05-12 04:55:29.744087] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:22.683 [2024-05-12 04:55:29.744103] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:22.683 [2024-05-12 04:55:29.744115] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:22.683 [2024-05-12 04:55:29.751739] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:22.683 [2024-05-12 04:55:29.751774] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:22.683 [2024-05-12 04:55:29.758307] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:22.683 [2024-05-12 04:55:29.759030] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:22.683 [2024-05-12 04:55:29.772403] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:22.683 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.683 04:55:29 -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:22.683 04:55:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:22.683 04:55:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:22.683 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.683 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.942 04:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.942 04:55:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:22.942 04:55:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:22.942 04:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.942 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.942 [2024-05-12 04:55:30.002544] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:22.942 [2024-05-12 04:55:30.003176] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:22.942 [2024-05-12 04:55:30.003264] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:22.942 [2024-05-12 04:55:30.003281] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:22.942 [2024-05-12 04:55:30.010288] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:22.942 [2024-05-12 04:55:30.010321] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:22.942 [2024-05-12 
04:55:30.017353] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:22.942 [2024-05-12 04:55:30.018338] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:22.942 [2024-05-12 04:55:30.025404] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:22.942 04:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.942 04:55:30 -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:22.942 04:55:30 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:22.942 04:55:30 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:22.942 04:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.942 04:55:30 -- common/autotest_common.sh@10 -- # set +x 00:15:23.201 04:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.201 04:55:30 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:23.201 04:55:30 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:23.201 04:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.201 04:55:30 -- common/autotest_common.sh@10 -- # set +x 00:15:23.201 [2024-05-12 04:55:30.270490] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:23.201 [2024-05-12 04:55:30.270998] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:23.201 [2024-05-12 04:55:30.271012] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:23.201 [2024-05-12 04:55:30.271025] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:23.201 [2024-05-12 04:55:30.278315] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:23.201 [2024-05-12 04:55:30.278344] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:23.201 [2024-05-12 04:55:30.292326] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:23.201 [2024-05-12 04:55:30.293051] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:23.201 [2024-05-12 04:55:30.309328] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:23.201 04:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.201 04:55:30 -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:23.201 04:55:30 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:23.201 04:55:30 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:23.201 04:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.201 04:55:30 -- common/autotest_common.sh@10 -- # set +x 00:15:23.460 04:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.460 04:55:30 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:23.460 04:55:30 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:23.460 04:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.460 04:55:30 -- common/autotest_common.sh@10 -- # set +x 00:15:23.460 [2024-05-12 04:55:30.552469] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:23.460 [2024-05-12 04:55:30.553009] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:23.460 [2024-05-12 04:55:30.553036] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:23.460 [2024-05-12 04:55:30.553048] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_ADD_DEV 00:15:23.460 [2024-05-12 04:55:30.560576] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:23.460 [2024-05-12 04:55:30.560617] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:23.460 [2024-05-12 04:55:30.569384] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:23.460 [2024-05-12 04:55:30.570209] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:23.460 [2024-05-12 04:55:30.574437] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:23.460 04:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.460 04:55:30 -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:23.460 04:55:30 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:23.460 04:55:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.460 04:55:30 -- common/autotest_common.sh@10 -- # set +x 00:15:23.718 04:55:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.718 04:55:30 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:23.718 { 00:15:23.718 "ublk_device": "/dev/ublkb0", 00:15:23.718 "id": 0, 00:15:23.718 "queue_depth": 512, 00:15:23.718 "num_queues": 4, 00:15:23.718 "bdev_name": "Malloc0" 00:15:23.718 }, 00:15:23.718 { 00:15:23.718 "ublk_device": "/dev/ublkb1", 00:15:23.718 "id": 1, 00:15:23.718 "queue_depth": 512, 00:15:23.718 "num_queues": 4, 00:15:23.718 "bdev_name": "Malloc1" 00:15:23.718 }, 00:15:23.718 { 00:15:23.718 "ublk_device": "/dev/ublkb2", 00:15:23.718 "id": 2, 00:15:23.718 "queue_depth": 512, 00:15:23.718 "num_queues": 4, 00:15:23.718 "bdev_name": "Malloc2" 00:15:23.718 }, 00:15:23.718 { 00:15:23.718 "ublk_device": "/dev/ublkb3", 00:15:23.718 "id": 3, 00:15:23.718 "queue_depth": 512, 00:15:23.718 "num_queues": 4, 00:15:23.718 "bdev_name": "Malloc3" 00:15:23.718 } 00:15:23.718 ]' 00:15:23.718 04:55:30 -- ublk/ublk.sh@72 -- # seq 0 3 00:15:23.718 04:55:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:23.718 04:55:30 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:23.718 04:55:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:23.718 04:55:30 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:23.718 04:55:30 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:23.718 04:55:30 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:23.718 04:55:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:23.718 04:55:30 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:23.718 04:55:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:23.718 04:55:30 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:23.977 04:55:30 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:23.977 04:55:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:23.977 04:55:30 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:23.977 04:55:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:23.977 04:55:30 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:23.977 04:55:30 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:23.977 04:55:30 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:23.977 04:55:31 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:23.977 04:55:31 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:23.977 04:55:31 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:23.977 04:55:31 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:24.235 04:55:31 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:24.235 04:55:31 -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:15:24.235 04:55:31 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:24.235 04:55:31 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:24.235 04:55:31 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:24.235 04:55:31 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:24.235 04:55:31 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:24.235 04:55:31 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:24.235 04:55:31 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:24.494 04:55:31 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:24.494 04:55:31 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:24.494 04:55:31 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:24.494 04:55:31 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:24.494 04:55:31 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:24.494 04:55:31 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:24.494 04:55:31 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:24.494 04:55:31 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:24.494 04:55:31 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:24.494 04:55:31 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:24.494 04:55:31 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:24.752 04:55:31 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:24.752 04:55:31 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:24.752 04:55:31 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:24.752 04:55:31 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:24.752 04:55:31 -- ublk/ublk.sh@85 -- # seq 0 3 00:15:24.752 04:55:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:24.752 04:55:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:24.752 04:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:24.752 04:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:24.752 [2024-05-12 04:55:31.700700] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:24.752 [2024-05-12 04:55:31.731680] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:24.752 [2024-05-12 04:55:31.736625] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:24.752 [2024-05-12 04:55:31.744346] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:24.752 [2024-05-12 04:55:31.744718] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:24.752 [2024-05-12 04:55:31.744743] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:24.752 04:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:24.752 04:55:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:24.752 04:55:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:24.752 04:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:24.752 04:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:24.752 [2024-05-12 04:55:31.754441] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:24.752 [2024-05-12 04:55:31.794309] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:24.752 [2024-05-12 04:55:31.799553] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:24.752 [2024-05-12 04:55:31.808331] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:24.752 [2024-05-12 04:55:31.808744] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: 
ublk1: remove from tailq 00:15:24.752 [2024-05-12 04:55:31.808770] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:24.752 04:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:24.752 04:55:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:24.752 04:55:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:24.752 04:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:24.752 04:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:24.752 [2024-05-12 04:55:31.812409] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:24.752 [2024-05-12 04:55:31.858315] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:24.752 [2024-05-12 04:55:31.859477] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:24.752 [2024-05-12 04:55:31.868470] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:24.752 [2024-05-12 04:55:31.868874] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:24.752 [2024-05-12 04:55:31.868911] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:24.752 04:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:24.752 04:55:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:24.752 04:55:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:24.752 04:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:24.752 04:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:24.752 [2024-05-12 04:55:31.872572] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:25.010 [2024-05-12 04:55:31.913297] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:25.010 [2024-05-12 04:55:31.914515] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:25.010 [2024-05-12 04:55:31.921434] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:25.010 [2024-05-12 04:55:31.921792] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:25.010 [2024-05-12 04:55:31.921815] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:25.010 04:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.010 04:55:31 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:25.272 [2024-05-12 04:55:32.178355] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:25.272 [2024-05-12 04:55:32.186238] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:25.272 [2024-05-12 04:55:32.186293] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:25.272 04:55:32 -- ublk/ublk.sh@93 -- # seq 0 3 00:15:25.272 04:55:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:25.272 04:55:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:25.272 04:55:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.272 04:55:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.531 04:55:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.531 04:55:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:25.531 04:55:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:25.531 04:55:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.531 04:55:32 -- common/autotest_common.sh@10 -- # set +x 00:15:25.790 04:55:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.790 
04:55:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:25.790 04:55:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:25.790 04:55:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.790 04:55:32 -- common/autotest_common.sh@10 -- # set +x 00:15:26.048 04:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.048 04:55:33 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:26.048 04:55:33 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:26.048 04:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.048 04:55:33 -- common/autotest_common.sh@10 -- # set +x 00:15:26.305 04:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.305 04:55:33 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:26.305 04:55:33 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:26.305 04:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.305 04:55:33 -- common/autotest_common.sh@10 -- # set +x 00:15:26.305 04:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.305 04:55:33 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:26.305 04:55:33 -- lvol/common.sh@26 -- # jq length 00:15:26.305 04:55:33 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:26.305 04:55:33 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:26.305 04:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.305 04:55:33 -- common/autotest_common.sh@10 -- # set +x 00:15:26.305 04:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.305 04:55:33 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:26.305 04:55:33 -- lvol/common.sh@28 -- # jq length 00:15:26.563 04:55:33 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:26.563 00:15:26.563 real 0m3.921s 00:15:26.563 user 0m1.360s 00:15:26.563 sys 0m0.165s 00:15:26.563 04:55:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.563 04:55:33 -- common/autotest_common.sh@10 -- # set +x 00:15:26.563 ************************************ 00:15:26.563 END TEST test_create_multi_ublk 00:15:26.563 ************************************ 00:15:26.563 04:55:33 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:26.563 04:55:33 -- ublk/ublk.sh@147 -- # cleanup 00:15:26.563 04:55:33 -- ublk/ublk.sh@130 -- # killprocess 70486 00:15:26.563 04:55:33 -- common/autotest_common.sh@926 -- # '[' -z 70486 ']' 00:15:26.563 04:55:33 -- common/autotest_common.sh@930 -- # kill -0 70486 00:15:26.563 04:55:33 -- common/autotest_common.sh@931 -- # uname 00:15:26.563 04:55:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:26.563 04:55:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70486 00:15:26.564 04:55:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:26.564 04:55:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:26.564 killing process with pid 70486 00:15:26.564 04:55:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70486' 00:15:26.564 04:55:33 -- common/autotest_common.sh@945 -- # kill 70486 00:15:26.564 04:55:33 -- common/autotest_common.sh@950 -- # wait 70486 00:15:27.498 [2024-05-12 04:55:34.367459] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:27.498 [2024-05-12 04:55:34.367532] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:28.434 00:15:28.434 real 0m28.560s 00:15:28.434 user 0m43.046s 00:15:28.434 sys 0m8.791s 00:15:28.434 04:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.434 
************************************ 00:15:28.434 END TEST ublk 00:15:28.434 ************************************ 00:15:28.434 04:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:28.434 04:55:35 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:28.434 04:55:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:28.434 04:55:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.434 04:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:28.434 ************************************ 00:15:28.434 START TEST ublk_recovery 00:15:28.434 ************************************ 00:15:28.434 04:55:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:28.434 * Looking for test storage... 00:15:28.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:28.434 04:55:35 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:28.434 04:55:35 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:28.434 04:55:35 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:28.434 04:55:35 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:28.434 04:55:35 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:28.434 04:55:35 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:28.434 04:55:35 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:28.434 04:55:35 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:28.434 04:55:35 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:28.434 04:55:35 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:28.434 04:55:35 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=70878 00:15:28.434 04:55:35 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:28.434 04:55:35 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:28.434 04:55:35 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 70878 00:15:28.434 04:55:35 -- common/autotest_common.sh@819 -- # '[' -z 70878 ']' 00:15:28.434 04:55:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.434 04:55:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.434 04:55:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.434 04:55:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.434 04:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:28.693 [2024-05-12 04:55:35.603631] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
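The recovery run that follows drives the interesting path: fio is started against /dev/ublkb1, the first target (pid 70878) is killed with SIGKILL mid-I/O, and a replacement target (pid 71025) re-adopts the still-open kernel device. Stripped to the RPCs visible below (ublk_recovery.sh@47-49), the re-attach sequence on the new target is, as a sketch assuming the replacement spdk_tgt was started with the same -m 0x3 -L ublk arguments:

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev under the same name
    ./scripts/rpc.py ublk_recover_disk malloc0 1             # re-attach it to the surviving ublk id 1
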
00:15:28.693 [2024-05-12 04:55:35.603810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70878 ] 00:15:28.693 [2024-05-12 04:55:35.768704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:28.951 [2024-05-12 04:55:35.936878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.951 [2024-05-12 04:55:35.937288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.951 [2024-05-12 04:55:35.937440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.362 04:55:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:30.362 04:55:37 -- common/autotest_common.sh@852 -- # return 0 00:15:30.362 04:55:37 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:30.362 04:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.362 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 [2024-05-12 04:55:37.157468] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:30.362 04:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.362 04:55:37 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:30.362 04:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.362 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 malloc0 00:15:30.362 04:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.362 04:55:37 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:30.362 04:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.362 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:15:30.362 [2024-05-12 04:55:37.285986] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:30.362 [2024-05-12 04:55:37.286127] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:30.362 [2024-05-12 04:55:37.286142] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:30.362 [2024-05-12 04:55:37.286154] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:30.362 [2024-05-12 04:55:37.292499] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:30.362 [2024-05-12 04:55:37.292528] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:30.362 [2024-05-12 04:55:37.299311] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:30.363 [2024-05-12 04:55:37.299488] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:30.363 [2024-05-12 04:55:37.314406] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:30.363 1 00:15:30.363 04:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.363 04:55:37 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:31.298 04:55:38 -- ublk/ublk_recovery.sh@31 -- # fio_proc=70921 00:15:31.298 04:55:38 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:31.298 04:55:38 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:31.556 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.556 fio-3.35 00:15:31.556 Starting 1 process 00:15:36.823 04:55:43 -- ublk/ublk_recovery.sh@36 -- # kill -9 70878 00:15:36.823 04:55:43 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:42.104 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 70878 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:42.104 04:55:48 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71025 00:15:42.104 04:55:48 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:42.104 04:55:48 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71025 00:15:42.104 04:55:48 -- common/autotest_common.sh@819 -- # '[' -z 71025 ']' 00:15:42.104 04:55:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.104 04:55:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.104 04:55:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.104 04:55:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.104 04:55:48 -- common/autotest_common.sh@10 -- # set +x 00:15:42.104 04:55:48 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:42.104 [2024-05-12 04:55:48.427815] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:42.104 [2024-05-12 04:55:48.427975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71025 ] 00:15:42.104 [2024-05-12 04:55:48.587989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:42.104 [2024-05-12 04:55:48.814881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.104 [2024-05-12 04:55:48.815307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.104 [2024-05-12 04:55:48.815325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.042 04:55:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.042 04:55:50 -- common/autotest_common.sh@852 -- # return 0 00:15:43.042 04:55:50 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:43.042 04:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.042 04:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:43.042 [2024-05-12 04:55:50.092748] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:43.042 04:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.042 04:55:50 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:43.042 04:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.042 04:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:43.301 malloc0 00:15:43.301 04:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.301 04:55:50 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:43.301 04:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.301 04:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:43.301 [2024-05-12 04:55:50.214509] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:43.301 [2024-05-12 04:55:50.214574] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:43.301 [2024-05-12 04:55:50.214588] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:43.301 [2024-05-12 04:55:50.221429] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:43.301 [2024-05-12 04:55:50.221456] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:43.301 [2024-05-12 04:55:50.221568] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:43.301 1 00:15:43.301 04:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.301 04:55:50 -- ublk/ublk_recovery.sh@52 -- # wait 70921 00:16:09.875 [2024-05-12 04:56:13.796306] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:09.875 [2024-05-12 04:56:13.800171] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:09.875 [2024-05-12 04:56:13.804600] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:09.875 [2024-05-12 04:56:13.804653] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:31.812 00:16:31.812 fio_test: (groupid=0, jobs=1): err= 0: pid=70928: Sun May 12 04:56:38 2024 00:16:31.812 read: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(2532MiB/60002msec) 00:16:31.812 slat (nsec): min=1751, max=191119, avg=6003.37, stdev=2826.39 00:16:31.812 clat (usec): min=908, max=30487k, avg=6055.20, stdev=312168.74 00:16:31.812 lat (usec): min=926, max=30487k, avg=6061.20, stdev=312168.73 00:16:31.812 clat percentiles (msec): 00:16:31.812 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:16:31.812 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:16:31.812 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 4], 95.00th=[ 4], 00:16:31.812 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 8], 99.95th=[ 9], 00:16:31.812 | 99.99th=[17113] 00:16:31.812 bw ( KiB/s): min=21688, max=92544, per=100.00%, avg=86545.08, stdev=11738.89, samples=59 00:16:31.812 iops : min= 5422, max=23136, avg=21636.27, stdev=2934.73, samples=59 00:16:31.812 write: IOPS=10.8k, BW=42.2MiB/s (44.2MB/s)(2531MiB/60002msec); 0 zone resets 00:16:31.812 slat (usec): min=2, max=195, avg= 6.05, stdev= 2.92 00:16:31.812 clat (usec): min=801, max=30488k, avg=5782.02, stdev=293340.92 00:16:31.812 lat (usec): min=822, max=30488k, avg=5788.07, stdev=293340.92 00:16:31.812 clat percentiles (usec): 00:16:31.812 | 1.00th=[ 2409], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:16:31.812 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:16:31.812 | 70.00th=[ 2933], 80.00th=[ 3032], 90.00th=[ 3163], 95.00th=[ 3752], 00:16:31.812 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 7767], 99.95th=[ 8717], 00:16:31.812 | 99.99th=[13829] 00:16:31.812 bw ( KiB/s): min=21360, max=92992, per=100.00%, avg=86478.64, stdev=11682.31, samples=59 00:16:31.812 iops : min= 5340, max=23248, avg=21619.66, stdev=2920.58, samples=59 00:16:31.812 lat (usec) : 1000=0.01% 00:16:31.812 lat (msec) : 2=0.17%, 4=95.38%, 10=4.42%, 20=0.02%, >=2000=0.01% 00:16:31.812 cpu : usr=5.47%, sys=12.22%, ctx=37947, majf=0, minf=13 00:16:31.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:31.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:16:31.812 issued rwts: total=648301,647842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.812 00:16:31.812 Run status group 0 (all jobs): 00:16:31.812 READ: bw=42.2MiB/s (44.3MB/s), 42.2MiB/s-42.2MiB/s (44.3MB/s-44.3MB/s), io=2532MiB (2655MB), run=60002-60002msec 00:16:31.812 WRITE: bw=42.2MiB/s (44.2MB/s), 42.2MiB/s-42.2MiB/s (44.2MB/s-44.2MB/s), io=2531MiB (2654MB), run=60002-60002msec 00:16:31.812 00:16:31.812 Disk stats (read/write): 00:16:31.812 ublkb1: ios=645757/645337, merge=0/0, ticks=3860666/3613245, in_queue=7473911, util=99.91% 00:16:31.812 04:56:38 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:31.812 04:56:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.812 04:56:38 -- common/autotest_common.sh@10 -- # set +x 00:16:31.812 [2024-05-12 04:56:38.576998] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:31.812 [2024-05-12 04:56:38.620356] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:31.812 [2024-05-12 04:56:38.620669] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:31.812 [2024-05-12 04:56:38.628255] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:31.812 [2024-05-12 04:56:38.628384] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:31.812 [2024-05-12 04:56:38.628406] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:31.812 04:56:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.812 04:56:38 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:31.812 04:56:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.812 04:56:38 -- common/autotest_common.sh@10 -- # set +x 00:16:31.812 [2024-05-12 04:56:38.643374] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:31.812 [2024-05-12 04:56:38.650326] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:31.812 [2024-05-12 04:56:38.650368] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:31.812 04:56:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.812 04:56:38 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:31.812 04:56:38 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:31.812 04:56:38 -- ublk/ublk_recovery.sh@14 -- # killprocess 71025 00:16:31.812 04:56:38 -- common/autotest_common.sh@926 -- # '[' -z 71025 ']' 00:16:31.812 04:56:38 -- common/autotest_common.sh@930 -- # kill -0 71025 00:16:31.812 04:56:38 -- common/autotest_common.sh@931 -- # uname 00:16:31.812 04:56:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:31.812 04:56:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71025 00:16:31.812 04:56:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:31.812 killing process with pid 71025 00:16:31.812 04:56:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:31.812 04:56:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71025' 00:16:31.812 04:56:38 -- common/autotest_common.sh@945 -- # kill 71025 00:16:31.812 04:56:38 -- common/autotest_common.sh@950 -- # wait 71025 00:16:32.749 [2024-05-12 04:56:39.555790] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:32.749 [2024-05-12 04:56:39.555884] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:33.685 ************************************ 00:16:33.685 END TEST ublk_recovery 00:16:33.685 
************************************ 00:16:33.685 00:16:33.685 real 1m5.305s 00:16:33.685 user 1m52.135s 00:16:33.685 sys 0m18.505s 00:16:33.686 04:56:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.686 04:56:40 -- common/autotest_common.sh@10 -- # set +x 00:16:33.686 04:56:40 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@268 -- # timing_exit lib 00:16:33.686 04:56:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:33.686 04:56:40 -- common/autotest_common.sh@10 -- # set +x 00:16:33.686 04:56:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:16:33.686 04:56:40 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:33.686 04:56:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:33.686 04:56:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:33.686 04:56:40 -- common/autotest_common.sh@10 -- # set +x 00:16:33.945 ************************************ 00:16:33.945 START TEST ftl 00:16:33.945 ************************************ 00:16:33.945 04:56:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:33.945 * Looking for test storage... 00:16:33.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.945 04:56:40 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:33.945 04:56:40 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:33.945 04:56:40 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.945 04:56:40 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.945 04:56:40 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:33.945 04:56:40 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:33.945 04:56:40 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.945 04:56:40 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:33.945 04:56:40 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:33.945 04:56:40 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.945 04:56:40 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.945 04:56:40 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:33.945 04:56:40 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:33.945 04:56:40 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.945 04:56:40 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.945 04:56:40 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:33.945 04:56:40 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:33.945 04:56:40 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.945 04:56:40 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.945 04:56:40 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:33.945 04:56:40 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:33.945 04:56:40 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.945 04:56:40 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.945 04:56:40 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.945 04:56:40 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.945 04:56:40 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:33.945 04:56:40 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:33.945 04:56:40 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.945 04:56:40 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.945 04:56:40 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.945 04:56:40 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:33.945 04:56:40 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:33.945 04:56:40 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:33.945 04:56:40 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:33.945 04:56:40 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:34.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.512 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:34.512 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:34.512 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:34.512 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:34.512 04:56:41 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71821 00:16:34.512 04:56:41 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:34.512 04:56:41 -- ftl/ftl.sh@38 -- # waitforlisten 71821 00:16:34.512 04:56:41 -- common/autotest_common.sh@819 -- # '[' -z 71821 ']' 00:16:34.512 04:56:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.512 04:56:41 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:16:34.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.512 04:56:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.512 04:56:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:34.512 04:56:41 -- common/autotest_common.sh@10 -- # set +x 00:16:34.512 [2024-05-12 04:56:41.558071] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:34.512 [2024-05-12 04:56:41.559078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71821 ] 00:16:34.771 [2024-05-12 04:56:41.734628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.030 [2024-05-12 04:56:41.956584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:35.030 [2024-05-12 04:56:41.956868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.596 04:56:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.596 04:56:42 -- common/autotest_common.sh@852 -- # return 0 00:16:35.596 04:56:42 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:35.596 04:56:42 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:36.531 04:56:43 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:36.531 04:56:43 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:37.099 04:56:44 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:37.099 04:56:44 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:37.099 04:56:44 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:37.358 04:56:44 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:16:37.358 04:56:44 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:37.358 04:56:44 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:16:37.358 04:56:44 -- ftl/ftl.sh@50 -- # break 00:16:37.358 04:56:44 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:16:37.358 04:56:44 -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:37.358 04:56:44 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:37.359 04:56:44 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:37.359 04:56:44 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:16:37.359 04:56:44 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:37.359 04:56:44 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:16:37.359 04:56:44 -- ftl/ftl.sh@63 -- # break 00:16:37.359 04:56:44 -- ftl/ftl.sh@66 -- # killprocess 71821 00:16:37.359 04:56:44 -- common/autotest_common.sh@926 -- # '[' -z 71821 ']' 00:16:37.359 04:56:44 -- common/autotest_common.sh@930 -- # kill -0 71821 00:16:37.359 04:56:44 -- common/autotest_common.sh@931 -- # uname 00:16:37.359 04:56:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:37.359 04:56:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71821 00:16:37.618 04:56:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:16:37.618 04:56:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:37.618 killing process with pid 71821 00:16:37.618 04:56:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71821' 00:16:37.618 04:56:44 -- common/autotest_common.sh@945 -- # kill 71821 00:16:37.618 04:56:44 -- common/autotest_common.sh@950 -- # wait 71821 00:16:39.523 04:56:46 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:16:39.523 04:56:46 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:16:39.523 04:56:46 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:16:39.523 04:56:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:39.523 04:56:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.523 04:56:46 -- common/autotest_common.sh@10 -- # set +x 00:16:39.523 ************************************ 00:16:39.523 START TEST ftl_fio_basic 00:16:39.523 ************************************ 00:16:39.523 04:56:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:16:39.523 * Looking for test storage... 00:16:39.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:39.523 04:56:46 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:39.523 04:56:46 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:39.523 04:56:46 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:39.523 04:56:46 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:39.523 04:56:46 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:39.523 04:56:46 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:39.523 04:56:46 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.523 04:56:46 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:39.523 04:56:46 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:39.523 04:56:46 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.523 04:56:46 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.523 04:56:46 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:39.523 04:56:46 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:39.523 04:56:46 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:39.523 04:56:46 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:39.523 04:56:46 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:39.523 04:56:46 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:39.523 04:56:46 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.523 04:56:46 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.523 04:56:46 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:39.523 04:56:46 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:39.523 04:56:46 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:39.524 04:56:46 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:39.524 04:56:46 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:39.524 04:56:46 -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:39.524 04:56:46 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:39.524 04:56:46 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:39.524 04:56:46 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:39.524 04:56:46 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:39.524 04:56:46 -- ftl/fio.sh@11 -- # declare -A suite 00:16:39.524 04:56:46 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:39.524 04:56:46 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:39.524 04:56:46 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:39.524 04:56:46 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.524 04:56:46 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:16:39.524 04:56:46 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:16:39.524 04:56:46 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:39.524 04:56:46 -- ftl/fio.sh@26 -- # uuid= 00:16:39.524 04:56:46 -- ftl/fio.sh@27 -- # timeout=240 00:16:39.524 04:56:46 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:39.524 04:56:46 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:39.524 04:56:46 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:39.524 04:56:46 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:39.524 04:56:46 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:39.524 04:56:46 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:39.524 04:56:46 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:39.524 04:56:46 -- ftl/fio.sh@45 -- # svcpid=71950 00:16:39.524 04:56:46 -- ftl/fio.sh@46 -- # waitforlisten 71950 00:16:39.524 04:56:46 -- common/autotest_common.sh@819 -- # '[' -z 71950 ']' 00:16:39.524 04:56:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.524 04:56:46 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:39.524 04:56:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.524 04:56:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.524 04:56:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:39.524 04:56:46 -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 [2024-05-12 04:56:46.525664] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:16:39.524 [2024-05-12 04:56:46.525832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71950 ] 00:16:39.783 [2024-05-12 04:56:46.692575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.041 [2024-05-12 04:56:46.915381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.041 [2024-05-12 04:56:46.915749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.041 [2024-05-12 04:56:46.915904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.041 [2024-05-12 04:56:46.915911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.418 04:56:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.418 04:56:48 -- common/autotest_common.sh@852 -- # return 0 00:16:41.418 04:56:48 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:41.418 04:56:48 -- ftl/common.sh@54 -- # local name=nvme0 00:16:41.418 04:56:48 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:41.418 04:56:48 -- ftl/common.sh@56 -- # local size=103424 00:16:41.418 04:56:48 -- ftl/common.sh@59 -- # local base_bdev 00:16:41.418 04:56:48 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:41.418 04:56:48 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:41.418 04:56:48 -- ftl/common.sh@62 -- # local base_size 00:16:41.418 04:56:48 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:41.418 04:56:48 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:16:41.418 04:56:48 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:41.418 04:56:48 -- common/autotest_common.sh@1359 -- # local bs 00:16:41.418 04:56:48 -- common/autotest_common.sh@1360 -- # local nb 00:16:41.418 04:56:48 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:41.677 04:56:48 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:41.677 { 00:16:41.677 "name": "nvme0n1", 00:16:41.677 "aliases": [ 00:16:41.677 "5e555464-c3a9-40cd-a9df-1a22567cfef2" 00:16:41.677 ], 00:16:41.677 "product_name": "NVMe disk", 00:16:41.677 "block_size": 4096, 00:16:41.677 "num_blocks": 1310720, 00:16:41.677 "uuid": "5e555464-c3a9-40cd-a9df-1a22567cfef2", 00:16:41.677 "assigned_rate_limits": { 00:16:41.677 "rw_ios_per_sec": 0, 00:16:41.677 "rw_mbytes_per_sec": 0, 00:16:41.677 "r_mbytes_per_sec": 0, 00:16:41.677 "w_mbytes_per_sec": 0 00:16:41.677 }, 00:16:41.677 "claimed": false, 00:16:41.677 "zoned": false, 00:16:41.677 "supported_io_types": { 00:16:41.677 "read": true, 00:16:41.677 "write": true, 00:16:41.677 "unmap": true, 00:16:41.677 "write_zeroes": true, 00:16:41.677 "flush": true, 00:16:41.677 "reset": true, 00:16:41.677 "compare": true, 00:16:41.677 "compare_and_write": false, 00:16:41.677 "abort": true, 00:16:41.677 "nvme_admin": true, 00:16:41.677 "nvme_io": true 00:16:41.677 }, 00:16:41.677 "driver_specific": { 00:16:41.677 "nvme": [ 00:16:41.677 { 00:16:41.677 "pci_address": "0000:00:07.0", 00:16:41.677 "trid": { 00:16:41.677 "trtype": "PCIe", 00:16:41.677 "traddr": "0000:00:07.0" 00:16:41.677 }, 00:16:41.677 "ctrlr_data": { 00:16:41.677 "cntlid": 0, 00:16:41.677 "vendor_id": "0x1b36", 00:16:41.677 "model_number": "QEMU NVMe Ctrl", 00:16:41.677 "serial_number": 
"12341", 00:16:41.677 "firmware_revision": "8.0.0", 00:16:41.677 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:41.677 "oacs": { 00:16:41.677 "security": 0, 00:16:41.677 "format": 1, 00:16:41.677 "firmware": 0, 00:16:41.677 "ns_manage": 1 00:16:41.677 }, 00:16:41.677 "multi_ctrlr": false, 00:16:41.677 "ana_reporting": false 00:16:41.677 }, 00:16:41.677 "vs": { 00:16:41.677 "nvme_version": "1.4" 00:16:41.677 }, 00:16:41.677 "ns_data": { 00:16:41.677 "id": 1, 00:16:41.677 "can_share": false 00:16:41.677 } 00:16:41.677 } 00:16:41.677 ], 00:16:41.677 "mp_policy": "active_passive" 00:16:41.677 } 00:16:41.677 } 00:16:41.677 ]' 00:16:41.677 04:56:48 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:41.677 04:56:48 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:41.677 04:56:48 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:41.677 04:56:48 -- common/autotest_common.sh@1363 -- # nb=1310720 00:16:41.677 04:56:48 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:16:41.677 04:56:48 -- common/autotest_common.sh@1367 -- # echo 5120 00:16:41.677 04:56:48 -- ftl/common.sh@63 -- # base_size=5120 00:16:41.677 04:56:48 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:41.677 04:56:48 -- ftl/common.sh@67 -- # clear_lvols 00:16:41.677 04:56:48 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:41.677 04:56:48 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:41.936 04:56:48 -- ftl/common.sh@28 -- # stores= 00:16:41.936 04:56:48 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:42.194 04:56:49 -- ftl/common.sh@68 -- # lvs=36701353-e4d0-417b-9082-6b1b951d66c3 00:16:42.194 04:56:49 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 36701353-e4d0-417b-9082-6b1b951d66c3 00:16:42.453 04:56:49 -- ftl/fio.sh@48 -- # split_bdev=a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.453 04:56:49 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.453 04:56:49 -- ftl/common.sh@35 -- # local name=nvc0 00:16:42.453 04:56:49 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:42.453 04:56:49 -- ftl/common.sh@37 -- # local base_bdev=a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.453 04:56:49 -- ftl/common.sh@38 -- # local cache_size= 00:16:42.453 04:56:49 -- ftl/common.sh@41 -- # get_bdev_size a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.453 04:56:49 -- common/autotest_common.sh@1357 -- # local bdev_name=a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.453 04:56:49 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:42.453 04:56:49 -- common/autotest_common.sh@1359 -- # local bs 00:16:42.453 04:56:49 -- common/autotest_common.sh@1360 -- # local nb 00:16:42.453 04:56:49 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.714 04:56:49 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:42.714 { 00:16:42.714 "name": "a57926ce-1a68-4a28-88f9-9bd6a47a531b", 00:16:42.714 "aliases": [ 00:16:42.714 "lvs/nvme0n1p0" 00:16:42.714 ], 00:16:42.714 "product_name": "Logical Volume", 00:16:42.714 "block_size": 4096, 00:16:42.714 "num_blocks": 26476544, 00:16:42.714 "uuid": "a57926ce-1a68-4a28-88f9-9bd6a47a531b", 00:16:42.714 "assigned_rate_limits": { 00:16:42.714 "rw_ios_per_sec": 0, 00:16:42.714 "rw_mbytes_per_sec": 0, 00:16:42.714 "r_mbytes_per_sec": 0, 00:16:42.714 
"w_mbytes_per_sec": 0 00:16:42.714 }, 00:16:42.714 "claimed": false, 00:16:42.714 "zoned": false, 00:16:42.714 "supported_io_types": { 00:16:42.714 "read": true, 00:16:42.714 "write": true, 00:16:42.714 "unmap": true, 00:16:42.714 "write_zeroes": true, 00:16:42.714 "flush": false, 00:16:42.714 "reset": true, 00:16:42.714 "compare": false, 00:16:42.714 "compare_and_write": false, 00:16:42.714 "abort": false, 00:16:42.714 "nvme_admin": false, 00:16:42.714 "nvme_io": false 00:16:42.714 }, 00:16:42.714 "driver_specific": { 00:16:42.714 "lvol": { 00:16:42.714 "lvol_store_uuid": "36701353-e4d0-417b-9082-6b1b951d66c3", 00:16:42.714 "base_bdev": "nvme0n1", 00:16:42.714 "thin_provision": true, 00:16:42.714 "snapshot": false, 00:16:42.714 "clone": false, 00:16:42.714 "esnap_clone": false 00:16:42.714 } 00:16:42.714 } 00:16:42.714 } 00:16:42.714 ]' 00:16:42.714 04:56:49 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:42.714 04:56:49 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:42.714 04:56:49 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:42.714 04:56:49 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:42.714 04:56:49 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:42.714 04:56:49 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:42.714 04:56:49 -- ftl/common.sh@41 -- # local base_size=5171 00:16:42.714 04:56:49 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:42.714 04:56:49 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:42.972 04:56:50 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:42.972 04:56:50 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:42.972 04:56:50 -- ftl/common.sh@48 -- # get_bdev_size a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.972 04:56:50 -- common/autotest_common.sh@1357 -- # local bdev_name=a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:42.972 04:56:50 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:42.972 04:56:50 -- common/autotest_common.sh@1359 -- # local bs 00:16:42.972 04:56:50 -- common/autotest_common.sh@1360 -- # local nb 00:16:42.972 04:56:50 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:43.230 04:56:50 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:43.230 { 00:16:43.230 "name": "a57926ce-1a68-4a28-88f9-9bd6a47a531b", 00:16:43.230 "aliases": [ 00:16:43.230 "lvs/nvme0n1p0" 00:16:43.230 ], 00:16:43.230 "product_name": "Logical Volume", 00:16:43.230 "block_size": 4096, 00:16:43.230 "num_blocks": 26476544, 00:16:43.230 "uuid": "a57926ce-1a68-4a28-88f9-9bd6a47a531b", 00:16:43.230 "assigned_rate_limits": { 00:16:43.230 "rw_ios_per_sec": 0, 00:16:43.230 "rw_mbytes_per_sec": 0, 00:16:43.230 "r_mbytes_per_sec": 0, 00:16:43.230 "w_mbytes_per_sec": 0 00:16:43.230 }, 00:16:43.230 "claimed": false, 00:16:43.230 "zoned": false, 00:16:43.230 "supported_io_types": { 00:16:43.230 "read": true, 00:16:43.230 "write": true, 00:16:43.230 "unmap": true, 00:16:43.230 "write_zeroes": true, 00:16:43.230 "flush": false, 00:16:43.230 "reset": true, 00:16:43.230 "compare": false, 00:16:43.230 "compare_and_write": false, 00:16:43.230 "abort": false, 00:16:43.230 "nvme_admin": false, 00:16:43.230 "nvme_io": false 00:16:43.230 }, 00:16:43.230 "driver_specific": { 00:16:43.230 "lvol": { 00:16:43.230 "lvol_store_uuid": "36701353-e4d0-417b-9082-6b1b951d66c3", 00:16:43.230 "base_bdev": "nvme0n1", 00:16:43.230 "thin_provision": true, 
00:16:43.230 "snapshot": false, 00:16:43.230 "clone": false, 00:16:43.230 "esnap_clone": false 00:16:43.230 } 00:16:43.230 } 00:16:43.230 } 00:16:43.230 ]' 00:16:43.230 04:56:50 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:43.492 04:56:50 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:43.492 04:56:50 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:43.492 04:56:50 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:43.492 04:56:50 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:43.492 04:56:50 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:43.492 04:56:50 -- ftl/common.sh@48 -- # cache_size=5171 00:16:43.492 04:56:50 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:43.750 04:56:50 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:43.750 04:56:50 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:43.750 04:56:50 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:43.750 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:43.750 04:56:50 -- ftl/fio.sh@56 -- # get_bdev_size a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:43.750 04:56:50 -- common/autotest_common.sh@1357 -- # local bdev_name=a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:43.750 04:56:50 -- common/autotest_common.sh@1358 -- # local bdev_info 00:16:43.750 04:56:50 -- common/autotest_common.sh@1359 -- # local bs 00:16:43.750 04:56:50 -- common/autotest_common.sh@1360 -- # local nb 00:16:43.750 04:56:50 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a57926ce-1a68-4a28-88f9-9bd6a47a531b 00:16:44.009 04:56:50 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:16:44.009 { 00:16:44.009 "name": "a57926ce-1a68-4a28-88f9-9bd6a47a531b", 00:16:44.009 "aliases": [ 00:16:44.009 "lvs/nvme0n1p0" 00:16:44.009 ], 00:16:44.009 "product_name": "Logical Volume", 00:16:44.009 "block_size": 4096, 00:16:44.009 "num_blocks": 26476544, 00:16:44.009 "uuid": "a57926ce-1a68-4a28-88f9-9bd6a47a531b", 00:16:44.009 "assigned_rate_limits": { 00:16:44.009 "rw_ios_per_sec": 0, 00:16:44.009 "rw_mbytes_per_sec": 0, 00:16:44.009 "r_mbytes_per_sec": 0, 00:16:44.009 "w_mbytes_per_sec": 0 00:16:44.009 }, 00:16:44.009 "claimed": false, 00:16:44.009 "zoned": false, 00:16:44.009 "supported_io_types": { 00:16:44.009 "read": true, 00:16:44.009 "write": true, 00:16:44.009 "unmap": true, 00:16:44.009 "write_zeroes": true, 00:16:44.009 "flush": false, 00:16:44.009 "reset": true, 00:16:44.009 "compare": false, 00:16:44.009 "compare_and_write": false, 00:16:44.009 "abort": false, 00:16:44.009 "nvme_admin": false, 00:16:44.009 "nvme_io": false 00:16:44.009 }, 00:16:44.009 "driver_specific": { 00:16:44.009 "lvol": { 00:16:44.009 "lvol_store_uuid": "36701353-e4d0-417b-9082-6b1b951d66c3", 00:16:44.009 "base_bdev": "nvme0n1", 00:16:44.009 "thin_provision": true, 00:16:44.009 "snapshot": false, 00:16:44.009 "clone": false, 00:16:44.009 "esnap_clone": false 00:16:44.009 } 00:16:44.009 } 00:16:44.009 } 00:16:44.009 ]' 00:16:44.009 04:56:50 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:16:44.009 04:56:50 -- common/autotest_common.sh@1362 -- # bs=4096 00:16:44.009 04:56:50 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:16:44.009 04:56:51 -- common/autotest_common.sh@1363 -- # nb=26476544 00:16:44.009 04:56:51 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:16:44.009 04:56:51 -- common/autotest_common.sh@1367 -- # echo 103424 00:16:44.009 
04:56:51 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:44.009 04:56:51 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:44.009 04:56:51 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a57926ce-1a68-4a28-88f9-9bd6a47a531b -c nvc0n1p0 --l2p_dram_limit 60 00:16:44.303 [2024-05-12 04:56:51.245656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.303 [2024-05-12 04:56:51.245722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.303 [2024-05-12 04:56:51.245744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:44.303 [2024-05-12 04:56:51.245757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.303 [2024-05-12 04:56:51.245838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.303 [2024-05-12 04:56:51.245858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.303 [2024-05-12 04:56:51.245890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:44.303 [2024-05-12 04:56:51.245901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.303 [2024-05-12 04:56:51.245955] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.303 [2024-05-12 04:56:51.247269] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.303 [2024-05-12 04:56:51.247304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.303 [2024-05-12 04:56:51.247322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.303 [2024-05-12 04:56:51.247340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:16:44.303 [2024-05-12 04:56:51.247353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.303 [2024-05-12 04:56:51.247499] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2ed0547c-81c4-499a-b3f8-8345337aee86 00:16:44.303 [2024-05-12 04:56:51.248599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.303 [2024-05-12 04:56:51.248644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:44.303 [2024-05-12 04:56:51.248664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:44.303 [2024-05-12 04:56:51.248682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.303 [2024-05-12 04:56:51.253193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.303 [2024-05-12 04:56:51.253264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.303 [2024-05-12 04:56:51.253282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.414 ms 00:16:44.303 [2024-05-12 04:56:51.253300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.303 [2024-05-12 04:56:51.253426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.303 [2024-05-12 04:56:51.253450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.303 [2024-05-12 04:56:51.253473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:44.303 [2024-05-12 04:56:51.253497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.304 [2024-05-12 04:56:51.253570] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:44.304 [2024-05-12 04:56:51.253592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.304 [2024-05-12 04:56:51.253606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:44.304 [2024-05-12 04:56:51.253622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.304 [2024-05-12 04:56:51.253672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.304 [2024-05-12 04:56:51.258365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.304 [2024-05-12 04:56:51.258402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.304 [2024-05-12 04:56:51.258422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:16:44.304 [2024-05-12 04:56:51.258437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.304 [2024-05-12 04:56:51.258494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.304 [2024-05-12 04:56:51.258510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.304 [2024-05-12 04:56:51.258525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:44.304 [2024-05-12 04:56:51.258537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.304 [2024-05-12 04:56:51.258613] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:44.304 [2024-05-12 04:56:51.258760] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:44.304 [2024-05-12 04:56:51.258792] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.304 [2024-05-12 04:56:51.258809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:44.304 [2024-05-12 04:56:51.258827] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.304 [2024-05-12 04:56:51.258843] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.304 [2024-05-12 04:56:51.258872] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:44.304 [2024-05-12 04:56:51.258884] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.304 [2024-05-12 04:56:51.258899] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:44.304 [2024-05-12 04:56:51.258911] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:44.304 [2024-05-12 04:56:51.258927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.304 [2024-05-12 04:56:51.258941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.304 [2024-05-12 04:56:51.258956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:16:44.304 [2024-05-12 04:56:51.258968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.304 [2024-05-12 04:56:51.259070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.304 [2024-05-12 04:56:51.259093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.304 [2024-05-12 04:56:51.259109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.055 ms 00:16:44.304 [2024-05-12 04:56:51.259121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.304 [2024-05-12 04:56:51.259245] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.304 [2024-05-12 04:56:51.259268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.304 [2024-05-12 04:56:51.259287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259318] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.304 [2024-05-12 04:56:51.259330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.304 [2024-05-12 04:56:51.259372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.304 [2024-05-12 04:56:51.259405] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.304 [2024-05-12 04:56:51.259417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:44.304 [2024-05-12 04:56:51.259432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.304 [2024-05-12 04:56:51.259443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.304 [2024-05-12 04:56:51.259456] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:44.304 [2024-05-12 04:56:51.259468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.304 [2024-05-12 04:56:51.259494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:44.304 [2024-05-12 04:56:51.259507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:44.304 [2024-05-12 04:56:51.259531] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:44.304 [2024-05-12 04:56:51.259542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.304 [2024-05-12 04:56:51.259567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.304 [2024-05-12 04:56:51.259604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.304 [2024-05-12 04:56:51.259639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259652] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:44.304 [2024-05-12 04:56:51.259678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.304 [2024-05-12 04:56:51.259714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.304 [2024-05-12 04:56:51.259741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.304 [2024-05-12 04:56:51.259777] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:44.304 [2024-05-12 04:56:51.259789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.304 [2024-05-12 04:56:51.259805] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.304 [2024-05-12 04:56:51.259821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.304 [2024-05-12 04:56:51.259847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.304 [2024-05-12 04:56:51.259876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:44.304 [2024-05-12 04:56:51.259888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.304 [2024-05-12 04:56:51.259904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.304 [2024-05-12 04:56:51.259916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.304 [2024-05-12 04:56:51.259932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.304 [2024-05-12 04:56:51.259943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.304 [2024-05-12 04:56:51.259959] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.304 [2024-05-12 04:56:51.259973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.304 [2024-05-12 04:56:51.259992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:44.304 [2024-05-12 04:56:51.260006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:44.304 [2024-05-12 04:56:51.260020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:44.304 [2024-05-12 04:56:51.260033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:44.304 [2024-05-12 04:56:51.260048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:44.304 [2024-05-12 04:56:51.260061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:44.304 
[2024-05-12 04:56:51.260078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:44.304 [2024-05-12 04:56:51.260091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:44.305 [2024-05-12 04:56:51.260105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:44.305 [2024-05-12 04:56:51.260117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:44.305 [2024-05-12 04:56:51.260131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:44.305 [2024-05-12 04:56:51.260144] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:44.305 [2024-05-12 04:56:51.260169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:44.305 [2024-05-12 04:56:51.260183] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.305 [2024-05-12 04:56:51.260199] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.305 [2024-05-12 04:56:51.260212] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.305 [2024-05-12 04:56:51.260242] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.305 [2024-05-12 04:56:51.260255] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.305 [2024-05-12 04:56:51.260270] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.305 [2024-05-12 04:56:51.260285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.260307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.305 [2024-05-12 04:56:51.260323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:16:44.305 [2024-05-12 04:56:51.260346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.278284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.278349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.305 [2024-05-12 04:56:51.278368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.859 ms 00:16:44.305 [2024-05-12 04:56:51.278381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.278499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.278525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:44.305 [2024-05-12 04:56:51.278539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:44.305 [2024-05-12 04:56:51.278561] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.315586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.315651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.305 [2024-05-12 04:56:51.315669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.951 ms 00:16:44.305 [2024-05-12 04:56:51.315685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.315736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.315753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.305 [2024-05-12 04:56:51.315766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:44.305 [2024-05-12 04:56:51.315779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.316177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.316206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.305 [2024-05-12 04:56:51.316235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:44.305 [2024-05-12 04:56:51.316254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.316421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.316445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.305 [2024-05-12 04:56:51.316459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:16:44.305 [2024-05-12 04:56:51.316473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.343663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.343722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.305 [2024-05-12 04:56:51.343739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.156 ms 00:16:44.305 [2024-05-12 04:56:51.343753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.305 [2024-05-12 04:56:51.356642] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:44.305 [2024-05-12 04:56:51.370099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.305 [2024-05-12 04:56:51.370166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:44.305 [2024-05-12 04:56:51.370188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.176 ms 00:16:44.305 [2024-05-12 04:56:51.370200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.584 [2024-05-12 04:56:51.429910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.584 [2024-05-12 04:56:51.429985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:44.584 [2024-05-12 04:56:51.430007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.612 ms 00:16:44.584 [2024-05-12 04:56:51.430022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.584 [2024-05-12 04:56:51.430090] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
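The trace above is `bdev_ftl_create` bringing up ftl0 on the stack this test assembled earlier (base NVMe -> lvstore -> thin lvol, plus a split partition of the second NVMe as NV cache). A minimal sketch of rebuilding the same stack by hand against a running spdk_tgt, using the names, PCI addresses, and sizes from this run — all of them specific to this job's QEMU devices and certain to differ elsewhere:

  #!/usr/bin/env bash
  set -e
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: QEMU NVMe at 0000:00:07.0 -> bdev nvme0n1 (5120 MiB in this run)
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
  # Lvstore on the base device; the RPC prints the lvstore UUID
  LVS=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
  # 103424 MiB lvol; -t (thin provisioning) lets it exceed the 5120 MiB backing device.
  # The RPC prints the lvol bdev name.
  LVOL=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS")
  # Cache device at 0000:00:06.0; split off a 5171 MiB partition to use as NV cache
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
  $RPC bdev_split_create nvc0n1 -s 5171 1              # -> nvc0n1p0
  # FTL bdev; first startup scrubs the NV cache data region (the multi-second
  # "Scrubbing 4GiB" step below), hence the long -t 240 RPC timeout
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" -c nvc0n1p0 --l2p_dram_limit 60

With --l2p_dram_limit 60, only part of the L2P table is kept resident, which is what the "l2p maximum resident size is: 59 (of 60) MiB" notice above reflects.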
00:16:44.584 [2024-05-12 04:56:51.430110] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:47.867 [2024-05-12 04:56:54.619021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.619100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:47.867 [2024-05-12 04:56:54.619123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3188.954 ms 00:16:47.867 [2024-05-12 04:56:54.619136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.619441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.619464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.867 [2024-05-12 04:56:54.619480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:16:47.867 [2024-05-12 04:56:54.619492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.649366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.649431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:47.867 [2024-05-12 04:56:54.649453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.800 ms 00:16:47.867 [2024-05-12 04:56:54.649466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.678368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.678417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:47.867 [2024-05-12 04:56:54.678439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.847 ms 00:16:47.867 [2024-05-12 04:56:54.678450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.678830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.678848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.867 [2024-05-12 04:56:54.678862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:16:47.867 [2024-05-12 04:56:54.678892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.752043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.752118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:47.867 [2024-05-12 04:56:54.752144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.082 ms 00:16:47.867 [2024-05-12 04:56:54.752158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.784096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.784137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:47.867 [2024-05-12 04:56:54.784158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.842 ms 00:16:47.867 [2024-05-12 04:56:54.784171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.788349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.788380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:16:47.867 [2024-05-12 04:56:54.788415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.091 ms 00:16:47.867 [2024-05-12 04:56:54.788426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.819557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.819607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:47.867 [2024-05-12 04:56:54.819641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.061 ms 00:16:47.867 [2024-05-12 04:56:54.819653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.819728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.819748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:47.867 [2024-05-12 04:56:54.819763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:47.867 [2024-05-12 04:56:54.819774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.819926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.867 [2024-05-12 04:56:54.819947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:47.867 [2024-05-12 04:56:54.819963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:47.867 [2024-05-12 04:56:54.819975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.867 [2024-05-12 04:56:54.821118] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3574.992 ms, result 0 00:16:47.867 { 00:16:47.867 "name": "ftl0", 00:16:47.867 "uuid": "2ed0547c-81c4-499a-b3f8-8345337aee86" 00:16:47.867 } 00:16:47.867 04:56:54 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:47.868 04:56:54 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:16:47.868 04:56:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:47.868 04:56:54 -- common/autotest_common.sh@889 -- # local i 00:16:47.868 04:56:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:47.868 04:56:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:47.868 04:56:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:48.125 04:56:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:48.383 [ 00:16:48.383 { 00:16:48.383 "name": "ftl0", 00:16:48.383 "aliases": [ 00:16:48.383 "2ed0547c-81c4-499a-b3f8-8345337aee86" 00:16:48.383 ], 00:16:48.383 "product_name": "FTL disk", 00:16:48.383 "block_size": 4096, 00:16:48.383 "num_blocks": 20971520, 00:16:48.383 "uuid": "2ed0547c-81c4-499a-b3f8-8345337aee86", 00:16:48.383 "assigned_rate_limits": { 00:16:48.383 "rw_ios_per_sec": 0, 00:16:48.383 "rw_mbytes_per_sec": 0, 00:16:48.383 "r_mbytes_per_sec": 0, 00:16:48.383 "w_mbytes_per_sec": 0 00:16:48.383 }, 00:16:48.383 "claimed": false, 00:16:48.383 "zoned": false, 00:16:48.383 "supported_io_types": { 00:16:48.383 "read": true, 00:16:48.383 "write": true, 00:16:48.383 "unmap": true, 00:16:48.383 "write_zeroes": true, 00:16:48.383 "flush": true, 00:16:48.383 "reset": false, 00:16:48.383 "compare": false, 00:16:48.383 "compare_and_write": false, 00:16:48.383 "abort": false, 00:16:48.383 "nvme_admin": false, 00:16:48.383 "nvme_io": false 00:16:48.383 }, 
00:16:48.383 "driver_specific": { 00:16:48.383 "ftl": { 00:16:48.383 "base_bdev": "a57926ce-1a68-4a28-88f9-9bd6a47a531b", 00:16:48.383 "cache": "nvc0n1p0" 00:16:48.383 } 00:16:48.383 } 00:16:48.383 } 00:16:48.383 ] 00:16:48.383 04:56:55 -- common/autotest_common.sh@895 -- # return 0 00:16:48.383 04:56:55 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:48.383 04:56:55 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:48.641 04:56:55 -- ftl/fio.sh@70 -- # echo ']}' 00:16:48.641 04:56:55 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:48.898 [2024-05-12 04:56:55.862345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.862422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:48.898 [2024-05-12 04:56:55.862442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.898 [2024-05-12 04:56:55.862457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.862503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:48.898 [2024-05-12 04:56:55.865780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.865807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:48.898 [2024-05-12 04:56:55.865839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:16:48.898 [2024-05-12 04:56:55.865851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.866393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.866421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:48.898 [2024-05-12 04:56:55.866453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:16:48.898 [2024-05-12 04:56:55.866465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.869955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.869980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:48.898 [2024-05-12 04:56:55.870012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:16:48.898 [2024-05-12 04:56:55.870024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.876416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.876441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:48.898 [2024-05-12 04:56:55.876475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.352 ms 00:16:48.898 [2024-05-12 04:56:55.876486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.905764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.905814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:48.898 [2024-05-12 04:56:55.905833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.164 ms 00:16:48.898 [2024-05-12 04:56:55.905845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.923819] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.923892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:48.898 [2024-05-12 04:56:55.923929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.918 ms 00:16:48.898 [2024-05-12 04:56:55.923942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.924173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.924211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:48.898 [2024-05-12 04:56:55.924250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:16:48.898 [2024-05-12 04:56:55.924285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.953898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.953947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:48.898 [2024-05-12 04:56:55.953965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.566 ms 00:16:48.898 [2024-05-12 04:56:55.953976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.898 [2024-05-12 04:56:55.983783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.898 [2024-05-12 04:56:55.983832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:48.899 [2024-05-12 04:56:55.983878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.747 ms 00:16:48.899 [2024-05-12 04:56:55.983892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.899 [2024-05-12 04:56:56.012567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.899 [2024-05-12 04:56:56.012617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:48.899 [2024-05-12 04:56:56.012651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.612 ms 00:16:48.899 [2024-05-12 04:56:56.012662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.158 [2024-05-12 04:56:56.042337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.158 [2024-05-12 04:56:56.042387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.158 [2024-05-12 04:56:56.042406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.540 ms 00:16:49.158 [2024-05-12 04:56:56.042417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.158 [2024-05-12 04:56:56.042475] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.158 [2024-05-12 04:56:56.042496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042562] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 
04:56:56.042947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.042991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:16:49.158 [2024-05-12 04:56:56.043301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:49.158 [2024-05-12 04:56:56.043574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:49.159 [2024-05-12 04:56:56.043958] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.159 [2024-05-12 04:56:56.043976] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ed0547c-81c4-499a-b3f8-8345337aee86 00:16:49.159 [2024-05-12 04:56:56.043989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.159 [2024-05-12 04:56:56.044002] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.159 [2024-05-12 04:56:56.044014] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.159 [2024-05-12 04:56:56.044028] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.159 [2024-05-12 04:56:56.044039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.159 [2024-05-12 04:56:56.044054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:49.159 [2024-05-12 04:56:56.044065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.159 [2024-05-12 04:56:56.044078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.159 [2024-05-12 04:56:56.044089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.159 [2024-05-12 04:56:56.044105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.159 [2024-05-12 04:56:56.044117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.159 [2024-05-12 04:56:56.044132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:16:49.159 [2024-05-12 04:56:56.044144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.060215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.159 [2024-05-12 04:56:56.060306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.159 [2024-05-12 04:56:56.060325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.964 ms 00:16:49.159 [2024-05-12 04:56:56.060338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.060566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.159 [2024-05-12 04:56:56.060586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.159 [2024-05-12 04:56:56.060602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:49.159 [2024-05-12 04:56:56.060616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.114251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.114313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.159 [2024-05-12 04:56:56.114349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.114361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.114440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.114455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.159 [2024-05-12 04:56:56.114469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.114483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.114605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.114624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.159 [2024-05-12 04:56:56.114640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.114651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.114689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.114702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
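A note on the statistics dump above: by the numbers shown, WAF (write amplification factor) is total media writes divided by user writes; with total writes 960 and user writes 0 the ratio is undefined, so the dump prints "inf". Any run that never services host I/O will show the same. An illustrative restatement, with the numbers copied from the dump:

  # Illustrative only; values taken from the ftl_dev_dump_stats output above.
  awk -v t=960 -v u=0 'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.3f\n", t/u }'

The shutdown itself was driven over RPC: the test first snapshots the live bdev subsystem configuration, then calls bdev_ftl_unload, which emits the 'FTL shutdown' trace seen here and returns true. A condensed sketch of that sequence, using the rpc.py calls shown in the log; the output filename below is illustrative (this run writes test/ftl/config/ftl.json):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Wrap the bdev subsystem config in a top-level "subsystems" array, as fio.sh does.
  { echo '{"subsystems": ['; "$rpc" save_subsystem_config -n bdev; echo ']}'; } > ftl.json
  # Triggers the 'FTL shutdown' management process; prints true on success.
  "$rpc" bdev_ftl_unload -b ftl0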
00:16:49.159 [2024-05-12 04:56:56.114715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.114727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.217785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.217854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.159 [2024-05-12 04:56:56.217873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.217885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.253526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.253581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.159 [2024-05-12 04:56:56.253599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.253614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.253710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.253740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.159 [2024-05-12 04:56:56.253754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.253766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.253848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.253864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.159 [2024-05-12 04:56:56.253878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.253889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.254020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.254038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.159 [2024-05-12 04:56:56.254052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.254064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.254140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.254158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:49.159 [2024-05-12 04:56:56.254173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.254184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.254296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.254314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.159 [2024-05-12 04:56:56.254329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.254341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.254403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.159 [2024-05-12 04:56:56.254420] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.159 [2024-05-12 04:56:56.254441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.159 [2024-05-12 04:56:56.254453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.159 [2024-05-12 04:56:56.254635] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 392.247 ms, result 0 00:16:49.159 true 00:16:49.159 04:56:56 -- ftl/fio.sh@75 -- # killprocess 71950 00:16:49.159 04:56:56 -- common/autotest_common.sh@926 -- # '[' -z 71950 ']' 00:16:49.159 04:56:56 -- common/autotest_common.sh@930 -- # kill -0 71950 00:16:49.159 04:56:56 -- common/autotest_common.sh@931 -- # uname 00:16:49.159 04:56:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:49.418 04:56:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71950 00:16:49.418 killing process with pid 71950 00:16:49.418 04:56:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:49.418 04:56:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:49.418 04:56:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71950' 00:16:49.418 04:56:56 -- common/autotest_common.sh@945 -- # kill 71950 00:16:49.418 04:56:56 -- common/autotest_common.sh@950 -- # wait 71950 00:16:53.603 04:57:00 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:53.603 04:57:00 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:53.603 04:57:00 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:53.603 04:57:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:53.603 04:57:00 -- common/autotest_common.sh@10 -- # set +x 00:16:53.603 04:57:00 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:53.603 04:57:00 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:53.603 04:57:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:16:53.603 04:57:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:53.603 04:57:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:16:53.603 04:57:00 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:53.603 04:57:00 -- common/autotest_common.sh@1320 -- # shift 00:16:53.603 04:57:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:16:53.603 04:57:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:16:53.603 04:57:00 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:53.603 04:57:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:16:53.603 04:57:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:16:53.603 04:57:00 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:53.603 04:57:00 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:53.603 04:57:00 -- common/autotest_common.sh@1326 -- # break 00:16:53.603 04:57:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:53.603 04:57:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:53.862 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:53.862 fio-3.35 00:16:53.862 Starting 1 thread 00:16:59.131 00:16:59.131 test: (groupid=0, jobs=1): err= 0: pid=72168: Sun May 12 04:57:06 2024 00:16:59.131 read: IOPS=938, BW=62.3MiB/s (65.4MB/s)(255MiB/4084msec) 00:16:59.131 slat (nsec): min=5290, max=40491, avg=7236.30, stdev=3213.78 00:16:59.131 clat (usec): min=338, max=922, avg=473.85, stdev=47.87 00:16:59.131 lat (usec): min=345, max=928, avg=481.09, stdev=48.61 00:16:59.131 clat percentiles (usec): 00:16:59.131 | 1.00th=[ 371], 5.00th=[ 408], 10.00th=[ 429], 20.00th=[ 441], 00:16:59.131 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 478], 00:16:59.131 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 570], 00:16:59.131 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 676], 99.95th=[ 766], 00:16:59.131 | 99.99th=[ 922] 00:16:59.131 write: IOPS=945, BW=62.8MiB/s (65.8MB/s)(256MiB/4080msec); 0 zone resets 00:16:59.131 slat (nsec): min=18849, max=87602, avg=24610.75, stdev=5691.22 00:16:59.131 clat (usec): min=358, max=1084, avg=541.94, stdev=60.91 00:16:59.131 lat (usec): min=384, max=1106, avg=566.55, stdev=61.21 00:16:59.131 clat percentiles (usec): 00:16:59.131 | 1.00th=[ 433], 5.00th=[ 461], 10.00th=[ 478], 20.00th=[ 498], 00:16:59.131 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:16:59.131 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 635], 00:16:59.131 | 99.00th=[ 791], 99.50th=[ 840], 99.90th=[ 947], 99.95th=[ 988], 00:16:59.131 | 99.99th=[ 1090] 00:16:59.131 bw ( KiB/s): min=61336, max=65552, per=100.00%, avg=64294.00, stdev=1397.84, samples=8 00:16:59.131 iops : min= 902, max= 964, avg=945.50, stdev=20.56, samples=8 00:16:59.131 lat (usec) : 500=49.37%, 750=49.92%, 1000=0.70% 00:16:59.131 lat (msec) : 2=0.01% 00:16:59.131 cpu : usr=99.14%, sys=0.17%, ctx=28, majf=0, minf=1318 00:16:59.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.131 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.131 00:16:59.131 Run status group 0 (all jobs): 00:16:59.131 READ: bw=62.3MiB/s (65.4MB/s), 62.3MiB/s-62.3MiB/s (65.4MB/s-65.4MB/s), io=255MiB (267MB), run=4084-4084msec 00:16:59.131 WRITE: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=256MiB (269MB), run=4080-4080msec 00:17:00.509 ----------------------------------------------------- 00:17:00.509 Suppressions used: 00:17:00.509 count bytes template 00:17:00.509 1 5 /usr/src/fio/parse.c 00:17:00.509 1 8 libtcmalloc_minimal.so 00:17:00.509 1 904 libcrypto.so 00:17:00.509 ----------------------------------------------------- 00:17:00.509 00:17:00.509 04:57:07 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:00.509 04:57:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:00.509 04:57:07 -- common/autotest_common.sh@10 -- # set +x 00:17:00.509 04:57:07 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:00.509 04:57:07 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:00.509 04:57:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:00.509 04:57:07 -- common/autotest_common.sh@10 -- # set +x 00:17:00.509 04:57:07 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:00.509 04:57:07 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:00.509 04:57:07 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:00.509 04:57:07 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:00.509 04:57:07 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:00.509 04:57:07 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.509 04:57:07 -- common/autotest_common.sh@1320 -- # shift 00:17:00.509 04:57:07 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:00.509 04:57:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:00.509 04:57:07 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.509 04:57:07 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:00.509 04:57:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:00.509 04:57:07 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:00.509 04:57:07 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:00.509 04:57:07 -- common/autotest_common.sh@1326 -- # break 00:17:00.509 04:57:07 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:00.509 04:57:07 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:00.768 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:00.768 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:00.768 fio-3.35 00:17:00.768 Starting 2 threads 00:17:32.846 00:17:32.846 first_half: (groupid=0, jobs=1): err= 0: pid=72266: Sun May 12 04:57:37 2024 00:17:32.846 read: IOPS=2302, BW=9212KiB/s (9433kB/s)(255MiB/28332msec) 00:17:32.846 slat (nsec): min=4369, max=44829, avg=7094.60, stdev=2001.37 00:17:32.846 clat (usec): min=879, max=299053, avg=41845.56, stdev=19226.21 00:17:32.846 lat (usec): min=887, max=299061, avg=41852.65, stdev=19226.35 00:17:32.846 clat percentiles (msec): 00:17:32.846 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:17:32.846 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:17:32.846 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 45], 95.00th=[ 56], 00:17:32.846 | 99.00th=[ 155], 99.50th=[ 180], 99.90th=[ 247], 99.95th=[ 275], 00:17:32.846 | 99.99th=[ 292] 00:17:32.846 write: IOPS=2760, BW=10.8MiB/s (11.3MB/s)(256MiB/23743msec); 0 zone resets 00:17:32.846 slat (usec): min=5, max=414, avg= 9.07, stdev= 6.34 00:17:32.846 clat (usec): min=432, max=101894, avg=13586.43, stdev=23754.61 00:17:32.846 lat (usec): min=442, max=101901, avg=13595.50, stdev=23754.89 00:17:32.846 clat percentiles (usec): 00:17:32.846 | 1.00th=[ 1012], 5.00th=[ 1319], 10.00th=[ 1532], 20.00th=[ 1958], 00:17:32.846 | 30.00th=[ 3621], 40.00th=[ 5342], 50.00th=[ 6325], 60.00th=[ 6980], 00:17:32.846 | 70.00th=[ 8356], 80.00th=[ 12256], 90.00th=[ 17171], 95.00th=[ 88605], 00:17:32.846 | 99.00th=[ 94897], 99.50th=[ 96994], 99.90th=[100140], 99.95th=[100140], 00:17:32.846 | 99.99th=[101188] 00:17:32.846 bw ( KiB/s): min= 640, max=36616, 
per=81.87%, avg=18078.62, stdev=10139.99, samples=29 00:17:32.846 iops : min= 162, max= 9154, avg=4519.72, stdev=2534.87, samples=29 00:17:32.846 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.38% 00:17:32.846 lat (msec) : 2=10.04%, 4=6.11%, 10=21.50%, 20=7.90%, 50=46.51% 00:17:32.846 lat (msec) : 100=6.41%, 250=1.02%, 500=0.05% 00:17:32.846 cpu : usr=99.15%, sys=0.26%, ctx=92, majf=0, minf=5565 00:17:32.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:32.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.846 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.847 issued rwts: total=65247,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.847 second_half: (groupid=0, jobs=1): err= 0: pid=72267: Sun May 12 04:57:37 2024 00:17:32.847 read: IOPS=2290, BW=9160KiB/s (9380kB/s)(255MiB/28464msec) 00:17:32.847 slat (nsec): min=4500, max=66047, avg=7081.29, stdev=1947.36 00:17:32.847 clat (usec): min=856, max=305093, avg=41888.98, stdev=20941.79 00:17:32.847 lat (usec): min=863, max=305100, avg=41896.06, stdev=20941.99 00:17:32.847 clat percentiles (msec): 00:17:32.847 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:17:32.847 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:17:32.847 | 70.00th=[ 40], 80.00th=[ 42], 90.00th=[ 44], 95.00th=[ 53], 00:17:32.847 | 99.00th=[ 163], 99.50th=[ 178], 99.90th=[ 209], 99.95th=[ 228], 00:17:32.847 | 99.99th=[ 296] 00:17:32.847 write: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(256MiB/22635msec); 0 zone resets 00:17:32.847 slat (usec): min=5, max=367, avg= 9.27, stdev= 5.30 00:17:32.847 clat (usec): min=460, max=101825, avg=13903.93, stdev=24532.93 00:17:32.847 lat (usec): min=474, max=101835, avg=13913.20, stdev=24533.05 00:17:32.847 clat percentiles (usec): 00:17:32.847 | 1.00th=[ 979], 5.00th=[ 1254], 10.00th=[ 1418], 20.00th=[ 1663], 00:17:32.847 | 30.00th=[ 1909], 40.00th=[ 2835], 50.00th=[ 4178], 60.00th=[ 6128], 00:17:32.847 | 70.00th=[ 8848], 80.00th=[ 13304], 90.00th=[ 42206], 95.00th=[ 88605], 00:17:32.847 | 99.00th=[ 94897], 99.50th=[ 96994], 99.90th=[ 99091], 99.95th=[100140], 00:17:32.847 | 99.99th=[101188] 00:17:32.847 bw ( KiB/s): min= 8, max=43488, per=87.94%, avg=19418.07, stdev=12194.97, samples=27 00:17:32.847 iops : min= 2, max=10872, avg=4854.52, stdev=3048.74, samples=27 00:17:32.847 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.53% 00:17:32.847 lat (msec) : 2=15.79%, 4=8.27%, 10=11.87%, 20=8.44%, 50=47.87% 00:17:32.847 lat (msec) : 100=5.68%, 250=1.50%, 500=0.01% 00:17:32.847 cpu : usr=99.15%, sys=0.29%, ctx=116, majf=0, minf=5562 00:17:32.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:32.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.847 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.847 issued rwts: total=65183,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.847 00:17:32.847 Run status group 0 (all jobs): 00:17:32.847 READ: bw=17.9MiB/s (18.8MB/s), 9160KiB/s-9212KiB/s (9380kB/s-9433kB/s), io=509MiB (534MB), run=28332-28464msec 00:17:32.847 WRITE: bw=21.6MiB/s (22.6MB/s), 10.8MiB/s-11.3MiB/s (11.3MB/s-11.9MB/s), io=512MiB (537MB), run=22635-23743msec 00:17:32.847 ----------------------------------------------------- 00:17:32.847 Suppressions used: 00:17:32.847 count bytes 
template 00:17:32.847 2 10 /usr/src/fio/parse.c 00:17:32.847 3 288 /usr/src/fio/iolog.c 00:17:32.847 1 8 libtcmalloc_minimal.so 00:17:32.847 1 904 libcrypto.so 00:17:32.847 ----------------------------------------------------- 00:17:32.847 00:17:32.847 04:57:39 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:32.847 04:57:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:32.847 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:17:32.847 04:57:39 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:32.847 04:57:39 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:32.847 04:57:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:32.847 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:17:32.847 04:57:39 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:32.847 04:57:39 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:32.847 04:57:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:32.847 04:57:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:32.847 04:57:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:32.847 04:57:39 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.847 04:57:39 -- common/autotest_common.sh@1320 -- # shift 00:17:32.847 04:57:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:32.847 04:57:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.847 04:57:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:32.847 04:57:39 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.847 04:57:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:32.847 04:57:39 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:32.847 04:57:39 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:32.847 04:57:39 -- common/autotest_common.sh@1326 -- # break 00:17:32.847 04:57:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:32.847 04:57:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:32.847 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:32.847 fio-3.35 00:17:32.847 Starting 1 thread 00:17:50.924 00:17:50.924 test: (groupid=0, jobs=1): err= 0: pid=72632: Sun May 12 04:57:56 2024 00:17:50.924 read: IOPS=6551, BW=25.6MiB/s (26.8MB/s)(255MiB/9952msec) 00:17:50.924 slat (nsec): min=4530, max=48450, avg=6724.13, stdev=2256.38 00:17:50.924 clat (usec): min=825, max=47062, avg=19526.42, stdev=1270.75 00:17:50.924 lat (usec): min=830, max=47069, avg=19533.14, stdev=1270.79 00:17:50.924 clat percentiles (usec): 00:17:50.924 | 1.00th=[18220], 5.00th=[18482], 10.00th=[18744], 20.00th=[19006], 00:17:50.924 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:17:50.924 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:17:50.924 | 99.00th=[25035], 99.50th=[25297], 99.90th=[37487], 99.95th=[42730], 00:17:50.924 | 99.99th=[46400] 00:17:50.924 write: IOPS=12.2k, BW=47.6MiB/s 
(49.9MB/s)(256MiB/5381msec); 0 zone resets 00:17:50.924 slat (usec): min=5, max=487, avg= 9.20, stdev= 5.66 00:17:50.924 clat (usec): min=659, max=67490, avg=10450.83, stdev=13362.01 00:17:50.924 lat (usec): min=667, max=67499, avg=10460.03, stdev=13362.08 00:17:50.924 clat percentiles (usec): 00:17:50.924 | 1.00th=[ 938], 5.00th=[ 1139], 10.00th=[ 1254], 20.00th=[ 1450], 00:17:50.924 | 30.00th=[ 1647], 40.00th=[ 2114], 50.00th=[ 6783], 60.00th=[ 7635], 00:17:50.924 | 70.00th=[ 8717], 80.00th=[10290], 90.00th=[38536], 95.00th=[41681], 00:17:50.924 | 99.00th=[45351], 99.50th=[47449], 99.90th=[50070], 99.95th=[55313], 00:17:50.924 | 99.99th=[62653] 00:17:50.924 bw ( KiB/s): min=30840, max=69168, per=97.81%, avg=47652.64, stdev=11028.58, samples=11 00:17:50.924 iops : min= 7710, max=17292, avg=11913.09, stdev=2757.10, samples=11 00:17:50.924 lat (usec) : 750=0.02%, 1000=0.90% 00:17:50.924 lat (msec) : 2=18.68%, 4=1.35%, 10=18.50%, 20=45.66%, 50=14.82% 00:17:50.924 lat (msec) : 100=0.06% 00:17:50.924 cpu : usr=98.63%, sys=0.76%, ctx=174, majf=0, minf=5567 00:17:50.924 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:50.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.924 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:50.924 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.924 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:50.924 00:17:50.924 Run status group 0 (all jobs): 00:17:50.924 READ: bw=25.6MiB/s (26.8MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=255MiB (267MB), run=9952-9952msec 00:17:50.924 WRITE: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=256MiB (268MB), run=5381-5381msec 00:17:50.924 ----------------------------------------------------- 00:17:50.924 Suppressions used: 00:17:50.924 count bytes template 00:17:50.924 1 5 /usr/src/fio/parse.c 00:17:50.924 2 192 /usr/src/fio/iolog.c 00:17:50.924 1 8 libtcmalloc_minimal.so 00:17:50.924 1 904 libcrypto.so 00:17:50.924 ----------------------------------------------------- 00:17:50.924 00:17:50.924 04:57:57 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:50.924 04:57:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:50.924 04:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:50.924 04:57:57 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.924 04:57:57 -- ftl/fio.sh@85 -- # remove_shm 00:17:50.924 Remove shared memory files 00:17:50.924 04:57:57 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:50.924 04:57:57 -- ftl/common.sh@205 -- # rm -f rm -f 00:17:50.924 04:57:57 -- ftl/common.sh@206 -- # rm -f rm -f 00:17:50.924 04:57:57 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56913 /dev/shm/spdk_tgt_trace.pid70878 00:17:50.924 04:57:57 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:50.924 04:57:57 -- ftl/common.sh@209 -- # rm -f rm -f 00:17:50.924 00:17:50.924 real 1m11.240s 00:17:50.924 user 2m40.111s 00:17:50.924 sys 0m3.596s 00:17:50.924 04:57:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.924 04:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:50.924 ************************************ 00:17:50.924 END TEST ftl_fio_basic 00:17:50.924 ************************************ 00:17:50.924 04:57:57 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:17:50.924 
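Every fio stage in the test above (randw-verify, randw-verify-j2, randw-verify-depth128) uses the same launch pattern: resolve the ASan runtime that the SPDK fio plugin links against, preload both, and hand fio a job file that uses ioengine=spdk_bdev (the job banners above show the resulting workloads, e.g. a single randwrite thread at iodepth=1 with 68KiB blocks for randw-verify). A minimal sketch of that pattern, with paths as they appear in the log:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # Resolve the sanitizer runtime the plugin was built against, as the
  # ldd | grep libasan | awk '{print $3}' pipeline above does.
  asan_lib=$(ldd "$plugin" | awk '/libasan/ { print $3 }')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio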
04:57:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:50.924 04:57:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:50.924 04:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:50.924 ************************************ 00:17:50.924 START TEST ftl_bdevperf 00:17:50.924 ************************************ 00:17:50.924 04:57:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:17:50.924 * Looking for test storage... 00:17:50.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:50.924 04:57:57 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:50.924 04:57:57 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.924 04:57:57 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.924 04:57:57 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:50.924 04:57:57 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:50.924 04:57:57 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.924 04:57:57 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:50.924 04:57:57 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:50.924 04:57:57 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.924 04:57:57 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.924 04:57:57 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:50.924 04:57:57 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:50.924 04:57:57 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.924 04:57:57 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.924 04:57:57 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:50.924 04:57:57 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:50.924 04:57:57 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.924 04:57:57 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.924 04:57:57 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:50.924 04:57:57 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:50.924 04:57:57 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.924 04:57:57 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.924 04:57:57 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.924 04:57:57 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.924 04:57:57 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:50.924 04:57:57 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:50.924 04:57:57 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.924 04:57:57 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@13 -- # use_append= 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:50.924 04:57:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:50.924 04:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=72873 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:50.924 04:57:57 -- ftl/bdevperf.sh@22 -- # waitforlisten 72873 00:17:50.924 04:57:57 -- common/autotest_common.sh@819 -- # '[' -z 72873 ']' 00:17:50.924 04:57:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.924 04:57:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:50.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.924 04:57:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.924 04:57:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:50.924 04:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:50.924 [2024-05-12 04:57:57.822928] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:50.924 [2024-05-12 04:57:57.823104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72873 ] 00:17:50.924 [2024-05-12 04:57:57.994108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.181 [2024-05-12 04:57:58.161669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.746 04:57:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:51.746 04:57:58 -- common/autotest_common.sh@852 -- # return 0 00:17:51.746 04:57:58 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:51.746 04:57:58 -- ftl/common.sh@54 -- # local name=nvme0 00:17:51.746 04:57:58 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:51.746 04:57:58 -- ftl/common.sh@56 -- # local size=103424 00:17:51.746 04:57:58 -- ftl/common.sh@59 -- # local base_bdev 00:17:51.746 04:57:58 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:52.005 04:57:59 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:52.005 04:57:59 -- ftl/common.sh@62 -- # local base_size 00:17:52.005 04:57:59 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:52.005 04:57:59 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:52.005 04:57:59 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:52.005 04:57:59 -- common/autotest_common.sh@1359 -- # local bs 00:17:52.005 04:57:59 -- common/autotest_common.sh@1360 -- # local nb 00:17:52.005 04:57:59 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:52.263 04:57:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:52.263 { 00:17:52.263 "name": "nvme0n1", 00:17:52.263 "aliases": [ 00:17:52.263 "7c8f7cde-376b-4b00-a0f2-c4abaa06ab07" 00:17:52.263 ], 00:17:52.263 "product_name": "NVMe disk", 
00:17:52.263 "block_size": 4096, 00:17:52.263 "num_blocks": 1310720, 00:17:52.263 "uuid": "7c8f7cde-376b-4b00-a0f2-c4abaa06ab07", 00:17:52.263 "assigned_rate_limits": { 00:17:52.263 "rw_ios_per_sec": 0, 00:17:52.263 "rw_mbytes_per_sec": 0, 00:17:52.263 "r_mbytes_per_sec": 0, 00:17:52.263 "w_mbytes_per_sec": 0 00:17:52.263 }, 00:17:52.263 "claimed": true, 00:17:52.263 "claim_type": "read_many_write_one", 00:17:52.263 "zoned": false, 00:17:52.263 "supported_io_types": { 00:17:52.263 "read": true, 00:17:52.263 "write": true, 00:17:52.263 "unmap": true, 00:17:52.263 "write_zeroes": true, 00:17:52.263 "flush": true, 00:17:52.263 "reset": true, 00:17:52.263 "compare": true, 00:17:52.263 "compare_and_write": false, 00:17:52.263 "abort": true, 00:17:52.263 "nvme_admin": true, 00:17:52.263 "nvme_io": true 00:17:52.263 }, 00:17:52.263 "driver_specific": { 00:17:52.263 "nvme": [ 00:17:52.263 { 00:17:52.263 "pci_address": "0000:00:07.0", 00:17:52.263 "trid": { 00:17:52.263 "trtype": "PCIe", 00:17:52.263 "traddr": "0000:00:07.0" 00:17:52.263 }, 00:17:52.263 "ctrlr_data": { 00:17:52.263 "cntlid": 0, 00:17:52.263 "vendor_id": "0x1b36", 00:17:52.263 "model_number": "QEMU NVMe Ctrl", 00:17:52.263 "serial_number": "12341", 00:17:52.263 "firmware_revision": "8.0.0", 00:17:52.263 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:52.263 "oacs": { 00:17:52.263 "security": 0, 00:17:52.263 "format": 1, 00:17:52.264 "firmware": 0, 00:17:52.264 "ns_manage": 1 00:17:52.264 }, 00:17:52.264 "multi_ctrlr": false, 00:17:52.264 "ana_reporting": false 00:17:52.264 }, 00:17:52.264 "vs": { 00:17:52.264 "nvme_version": "1.4" 00:17:52.264 }, 00:17:52.264 "ns_data": { 00:17:52.264 "id": 1, 00:17:52.264 "can_share": false 00:17:52.264 } 00:17:52.264 } 00:17:52.264 ], 00:17:52.264 "mp_policy": "active_passive" 00:17:52.264 } 00:17:52.264 } 00:17:52.264 ]' 00:17:52.264 04:57:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:52.264 04:57:59 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:52.264 04:57:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:52.522 04:57:59 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:52.522 04:57:59 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:52.522 04:57:59 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:52.522 04:57:59 -- ftl/common.sh@63 -- # base_size=5120 00:17:52.522 04:57:59 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:52.522 04:57:59 -- ftl/common.sh@67 -- # clear_lvols 00:17:52.522 04:57:59 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.522 04:57:59 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:52.522 04:57:59 -- ftl/common.sh@28 -- # stores=36701353-e4d0-417b-9082-6b1b951d66c3 00:17:52.522 04:57:59 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:52.522 04:57:59 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36701353-e4d0-417b-9082-6b1b951d66c3 00:17:52.780 04:57:59 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:53.038 04:58:00 -- ftl/common.sh@68 -- # lvs=260f386d-ff0e-4aaa-b9cf-077fb7e7aac6 00:17:53.038 04:58:00 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 260f386d-ff0e-4aaa-b9cf-077fb7e7aac6 00:17:53.297 04:58:00 -- ftl/bdevperf.sh@23 -- # split_bdev=3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:53.297 04:58:00 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:53.297 04:58:00 -- ftl/common.sh@35 -- # local name=nvc0 00:17:53.297 04:58:00 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:53.297 04:58:00 -- ftl/common.sh@37 -- # local base_bdev=3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:53.297 04:58:00 -- ftl/common.sh@38 -- # local cache_size= 00:17:53.297 04:58:00 -- ftl/common.sh@41 -- # get_bdev_size 3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:53.297 04:58:00 -- common/autotest_common.sh@1357 -- # local bdev_name=3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:53.297 04:58:00 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:53.297 04:58:00 -- common/autotest_common.sh@1359 -- # local bs 00:17:53.297 04:58:00 -- common/autotest_common.sh@1360 -- # local nb 00:17:53.297 04:58:00 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:53.555 04:58:00 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:53.555 { 00:17:53.555 "name": "3fb15a5b-3b25-418f-a026-2d7da8f9c9c5", 00:17:53.555 "aliases": [ 00:17:53.555 "lvs/nvme0n1p0" 00:17:53.555 ], 00:17:53.555 "product_name": "Logical Volume", 00:17:53.555 "block_size": 4096, 00:17:53.555 "num_blocks": 26476544, 00:17:53.555 "uuid": "3fb15a5b-3b25-418f-a026-2d7da8f9c9c5", 00:17:53.555 "assigned_rate_limits": { 00:17:53.555 "rw_ios_per_sec": 0, 00:17:53.555 "rw_mbytes_per_sec": 0, 00:17:53.555 "r_mbytes_per_sec": 0, 00:17:53.555 "w_mbytes_per_sec": 0 00:17:53.555 }, 00:17:53.555 "claimed": false, 00:17:53.555 "zoned": false, 00:17:53.555 "supported_io_types": { 00:17:53.555 "read": true, 00:17:53.555 "write": true, 00:17:53.555 "unmap": true, 00:17:53.555 "write_zeroes": true, 00:17:53.555 "flush": false, 00:17:53.555 "reset": true, 00:17:53.555 "compare": false, 00:17:53.555 "compare_and_write": false, 00:17:53.555 "abort": false, 00:17:53.555 "nvme_admin": false, 00:17:53.555 "nvme_io": false 00:17:53.555 }, 00:17:53.555 "driver_specific": { 00:17:53.555 "lvol": { 00:17:53.555 "lvol_store_uuid": "260f386d-ff0e-4aaa-b9cf-077fb7e7aac6", 00:17:53.555 "base_bdev": "nvme0n1", 00:17:53.555 "thin_provision": true, 00:17:53.555 "snapshot": false, 00:17:53.555 "clone": false, 00:17:53.555 "esnap_clone": false 00:17:53.555 } 00:17:53.555 } 00:17:53.555 } 00:17:53.555 ]' 00:17:53.555 04:58:00 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:53.555 04:58:00 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:53.555 04:58:00 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:53.813 04:58:00 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:53.813 04:58:00 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:53.813 04:58:00 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:53.813 04:58:00 -- ftl/common.sh@41 -- # local base_size=5171 00:17:53.813 04:58:00 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:53.813 04:58:00 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:54.073 04:58:00 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:54.073 04:58:00 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:54.073 04:58:00 -- ftl/common.sh@48 -- # get_bdev_size 3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:54.073 04:58:00 -- common/autotest_common.sh@1357 -- # local bdev_name=3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:54.073 04:58:00 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:54.073 04:58:00 -- common/autotest_common.sh@1359 -- # local 
bs 00:17:54.073 04:58:00 -- common/autotest_common.sh@1360 -- # local nb 00:17:54.073 04:58:00 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:54.331 04:58:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:54.331 { 00:17:54.331 "name": "3fb15a5b-3b25-418f-a026-2d7da8f9c9c5", 00:17:54.331 "aliases": [ 00:17:54.331 "lvs/nvme0n1p0" 00:17:54.331 ], 00:17:54.331 "product_name": "Logical Volume", 00:17:54.331 "block_size": 4096, 00:17:54.331 "num_blocks": 26476544, 00:17:54.331 "uuid": "3fb15a5b-3b25-418f-a026-2d7da8f9c9c5", 00:17:54.331 "assigned_rate_limits": { 00:17:54.331 "rw_ios_per_sec": 0, 00:17:54.331 "rw_mbytes_per_sec": 0, 00:17:54.331 "r_mbytes_per_sec": 0, 00:17:54.331 "w_mbytes_per_sec": 0 00:17:54.331 }, 00:17:54.331 "claimed": false, 00:17:54.331 "zoned": false, 00:17:54.331 "supported_io_types": { 00:17:54.331 "read": true, 00:17:54.331 "write": true, 00:17:54.331 "unmap": true, 00:17:54.331 "write_zeroes": true, 00:17:54.331 "flush": false, 00:17:54.331 "reset": true, 00:17:54.331 "compare": false, 00:17:54.331 "compare_and_write": false, 00:17:54.331 "abort": false, 00:17:54.331 "nvme_admin": false, 00:17:54.331 "nvme_io": false 00:17:54.331 }, 00:17:54.331 "driver_specific": { 00:17:54.331 "lvol": { 00:17:54.331 "lvol_store_uuid": "260f386d-ff0e-4aaa-b9cf-077fb7e7aac6", 00:17:54.331 "base_bdev": "nvme0n1", 00:17:54.331 "thin_provision": true, 00:17:54.331 "snapshot": false, 00:17:54.331 "clone": false, 00:17:54.331 "esnap_clone": false 00:17:54.331 } 00:17:54.331 } 00:17:54.331 } 00:17:54.331 ]' 00:17:54.331 04:58:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:54.331 04:58:01 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:54.331 04:58:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:54.331 04:58:01 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:54.331 04:58:01 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:54.331 04:58:01 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:54.331 04:58:01 -- ftl/common.sh@48 -- # cache_size=5171 00:17:54.331 04:58:01 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:54.589 04:58:01 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:54.589 04:58:01 -- ftl/bdevperf.sh@26 -- # get_bdev_size 3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:54.589 04:58:01 -- common/autotest_common.sh@1357 -- # local bdev_name=3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:54.589 04:58:01 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:54.589 04:58:01 -- common/autotest_common.sh@1359 -- # local bs 00:17:54.589 04:58:01 -- common/autotest_common.sh@1360 -- # local nb 00:17:54.589 04:58:01 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 00:17:54.847 04:58:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:54.847 { 00:17:54.847 "name": "3fb15a5b-3b25-418f-a026-2d7da8f9c9c5", 00:17:54.847 "aliases": [ 00:17:54.847 "lvs/nvme0n1p0" 00:17:54.847 ], 00:17:54.847 "product_name": "Logical Volume", 00:17:54.847 "block_size": 4096, 00:17:54.847 "num_blocks": 26476544, 00:17:54.847 "uuid": "3fb15a5b-3b25-418f-a026-2d7da8f9c9c5", 00:17:54.847 "assigned_rate_limits": { 00:17:54.847 "rw_ios_per_sec": 0, 00:17:54.847 "rw_mbytes_per_sec": 0, 00:17:54.847 "r_mbytes_per_sec": 0, 00:17:54.847 "w_mbytes_per_sec": 0 00:17:54.847 }, 00:17:54.847 
"claimed": false, 00:17:54.847 "zoned": false, 00:17:54.847 "supported_io_types": { 00:17:54.847 "read": true, 00:17:54.847 "write": true, 00:17:54.847 "unmap": true, 00:17:54.847 "write_zeroes": true, 00:17:54.847 "flush": false, 00:17:54.847 "reset": true, 00:17:54.847 "compare": false, 00:17:54.847 "compare_and_write": false, 00:17:54.847 "abort": false, 00:17:54.847 "nvme_admin": false, 00:17:54.847 "nvme_io": false 00:17:54.847 }, 00:17:54.847 "driver_specific": { 00:17:54.847 "lvol": { 00:17:54.847 "lvol_store_uuid": "260f386d-ff0e-4aaa-b9cf-077fb7e7aac6", 00:17:54.847 "base_bdev": "nvme0n1", 00:17:54.847 "thin_provision": true, 00:17:54.847 "snapshot": false, 00:17:54.847 "clone": false, 00:17:54.847 "esnap_clone": false 00:17:54.847 } 00:17:54.847 } 00:17:54.847 } 00:17:54.847 ]' 00:17:54.847 04:58:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:54.847 04:58:01 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:54.847 04:58:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:54.847 04:58:01 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:54.847 04:58:01 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:54.847 04:58:01 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:54.847 04:58:01 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:54.847 04:58:01 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3fb15a5b-3b25-418f-a026-2d7da8f9c9c5 -c nvc0n1p0 --l2p_dram_limit 20 00:17:55.106 [2024-05-12 04:58:02.092413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.092464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.106 [2024-05-12 04:58:02.092503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:55.106 [2024-05-12 04:58:02.092515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.092584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.092601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.106 [2024-05-12 04:58:02.092614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:55.106 [2024-05-12 04:58:02.092625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.092651] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.106 [2024-05-12 04:58:02.093565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.106 [2024-05-12 04:58:02.093600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.093614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.106 [2024-05-12 04:58:02.093628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:17:55.106 [2024-05-12 04:58:02.093639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.093739] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 67a60a42-57de-4e50-99cd-a287ef345750 00:17:55.106 [2024-05-12 04:58:02.094741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.094778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Default-initialize superblock 00:17:55.106 [2024-05-12 04:58:02.094810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:55.106 [2024-05-12 04:58:02.094822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.098919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.098961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.106 [2024-05-12 04:58:02.098993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.055 ms 00:17:55.106 [2024-05-12 04:58:02.099008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.099106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.099127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.106 [2024-05-12 04:58:02.099139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:55.106 [2024-05-12 04:58:02.099155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.099239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.099296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.106 [2024-05-12 04:58:02.099310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:55.106 [2024-05-12 04:58:02.099322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.099372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.106 [2024-05-12 04:58:02.103445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.103482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.106 [2024-05-12 04:58:02.103506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.099 ms 00:17:55.106 [2024-05-12 04:58:02.103518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.106 [2024-05-12 04:58:02.103559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.106 [2024-05-12 04:58:02.103589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.106 [2024-05-12 04:58:02.103603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:55.107 [2024-05-12 04:58:02.103613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.107 [2024-05-12 04:58:02.103665] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:55.107 [2024-05-12 04:58:02.103790] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:55.107 [2024-05-12 04:58:02.103813] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.107 [2024-05-12 04:58:02.103827] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:55.107 [2024-05-12 04:58:02.103842] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.107 [2024-05-12 04:58:02.103897] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.107 [2024-05-12 
04:58:02.103928] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:55.107 [2024-05-12 04:58:02.103940] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.107 [2024-05-12 04:58:02.103954] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:55.107 [2024-05-12 04:58:02.103965] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:55.107 [2024-05-12 04:58:02.103978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.107 [2024-05-12 04:58:02.103992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.107 [2024-05-12 04:58:02.104005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:17:55.107 [2024-05-12 04:58:02.104016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.107 [2024-05-12 04:58:02.104087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.107 [2024-05-12 04:58:02.104101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.107 [2024-05-12 04:58:02.104115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:55.107 [2024-05-12 04:58:02.104126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.107 [2024-05-12 04:58:02.104207] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.107 [2024-05-12 04:58:02.104236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.107 [2024-05-12 04:58:02.104267] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.107 [2024-05-12 04:58:02.104356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.107 [2024-05-12 04:58:02.104401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.107 [2024-05-12 04:58:02.104439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.107 [2024-05-12 04:58:02.104450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:55.107 [2024-05-12 04:58:02.104461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.107 [2024-05-12 04:58:02.104471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.107 [2024-05-12 04:58:02.104482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:55.107 [2024-05-12 04:58:02.104492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.107 [2024-05-12 04:58:02.104516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:55.107 [2024-05-12 04:58:02.104527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104537] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:55.107 [2024-05-12 04:58:02.104549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:55.107 [2024-05-12 04:58:02.104560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.107 [2024-05-12 04:58:02.104581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.107 [2024-05-12 04:58:02.104613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.107 [2024-05-12 04:58:02.104817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.107 [2024-05-12 04:58:02.104853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.107 [2024-05-12 04:58:02.104886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.107 [2024-05-12 04:58:02.104908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.107 [2024-05-12 04:58:02.104919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:55.107 [2024-05-12 04:58:02.104929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.107 [2024-05-12 04:58:02.104940] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.107 [2024-05-12 04:58:02.104952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.107 [2024-05-12 04:58:02.104964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.107 [2024-05-12 04:58:02.104975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.107 [2024-05-12 04:58:02.104988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.107 [2024-05-12 04:58:02.104998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.107 [2024-05-12 04:58:02.105010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.107 [2024-05-12 04:58:02.105020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.107 [2024-05-12 04:58:02.105033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.107 [2024-05-12 04:58:02.105044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.107 [2024-05-12 04:58:02.105057] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata 
layout - nvc: 00:17:55.107 [2024-05-12 04:58:02.105072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.107 [2024-05-12 04:58:02.105087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:55.107 [2024-05-12 04:58:02.105098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:55.107 [2024-05-12 04:58:02.105111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:55.107 [2024-05-12 04:58:02.105137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:55.107 [2024-05-12 04:58:02.105150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:55.107 [2024-05-12 04:58:02.105175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:55.107 [2024-05-12 04:58:02.105189] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:55.107 [2024-05-12 04:58:02.105200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:55.107 [2024-05-12 04:58:02.105212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:55.107 [2024-05-12 04:58:02.105223] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:55.107 [2024-05-12 04:58:02.105237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:55.107 [2024-05-12 04:58:02.105248] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:55.107 [2024-05-12 04:58:02.105263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:55.107 [2024-05-12 04:58:02.105274] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.107 [2024-05-12 04:58:02.105288] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.107 [2024-05-12 04:58:02.105302] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.107 [2024-05-12 04:58:02.105316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.107 [2024-05-12 04:58:02.105346] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.107 [2024-05-12 04:58:02.105360] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.107 [2024-05-12 04:58:02.105373] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.107 [2024-05-12 04:58:02.105386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.107 [2024-05-12 04:58:02.105398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:17:55.107 [2024-05-12 04:58:02.105410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.107 [2024-05-12 04:58:02.121684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.107 [2024-05-12 04:58:02.121727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.107 [2024-05-12 04:58:02.121761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.229 ms 00:17:55.107 [2024-05-12 04:58:02.121773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.107 [2024-05-12 04:58:02.121859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.107 [2024-05-12 04:58:02.121877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:55.107 [2024-05-12 04:58:02.121889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:55.108 [2024-05-12 04:58:02.121901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.108 [2024-05-12 04:58:02.172028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.108 [2024-05-12 04:58:02.172082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.108 [2024-05-12 04:58:02.172102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.072 ms 00:17:55.108 [2024-05-12 04:58:02.172116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.108 [2024-05-12 04:58:02.172163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.108 [2024-05-12 04:58:02.172181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.108 [2024-05-12 04:58:02.172209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.108 [2024-05-12 04:58:02.172236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.108 [2024-05-12 04:58:02.172707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.108 [2024-05-12 04:58:02.172736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.108 [2024-05-12 04:58:02.172754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:17:55.108 [2024-05-12 04:58:02.172768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.108 [2024-05-12 04:58:02.172926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.108 [2024-05-12 04:58:02.172962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.108 [2024-05-12 04:58:02.172978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:55.108 [2024-05-12 04:58:02.172991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.108 [2024-05-12 04:58:02.188078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.108 [2024-05-12 04:58:02.188123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.108 [2024-05-12 04:58:02.188142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.066 ms 00:17:55.108 [2024-05-12 04:58:02.188155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
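Note on the layout dump above: the L2P numbers are self-consistent. 20,971,520 entries at 4 bytes each is exactly the 80.00 MiB reported for "Region l2p", and at the 4096-byte block size those entries cover an 80 GiB logical space (0x1400000 blocks, the same figure the verify job later reports as its LBA range). The --l2p_dram_limit 20 argument passed to bdev_ftl_create caps how much of that table may stay resident in DRAM, which is why the cache init message that follows settles on 19 (of 20) MiB. A back-of-envelope check, bash arithmetic only, not part of the captured run:

  # L2P region size = entries * address size (values from the layout dump above)
  entries=20971520 addr_size=4
  echo "l2p region: $(( entries * addr_size / 1024 / 1024 )) MiB"        # -> 80 MiB
  echo "logical space: $(( entries * 4096 / 1024 / 1024 / 1024 )) GiB"   # -> 80 GiB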
00:17:55.108 [2024-05-12 04:58:02.200089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB
00:17:55.108 [2024-05-12 04:58:02.204962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.108 [2024-05-12 04:58:02.205010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:55.108 [2024-05-12 04:58:02.205044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.653 ms
00:17:55.108 [2024-05-12 04:58:02.205055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.364 [2024-05-12 04:58:02.272207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.364 [2024-05-12 04:58:02.272302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:17:55.364 [2024-05-12 04:58:02.272341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.098 ms
00:17:55.364 [2024-05-12 04:58:02.272352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.364 [2024-05-12 04:58:02.272423] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time.
00:17:55.364 [2024-05-12 04:58:02.272442] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB
00:17:57.891 [2024-05-12 04:58:04.751821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.751960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:17:57.891 [2024-05-12 04:58:04.751987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2479.410 ms
00:17:57.891 [2024-05-12 04:58:04.752000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.752208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.752249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:57.891 [2024-05-12 04:58:04.752281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms
00:17:57.891 [2024-05-12 04:58:04.752307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.780087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.780127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:17:57.891 [2024-05-12 04:58:04.780164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.719 ms
00:17:57.891 [2024-05-12 04:58:04.780176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.807282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.807317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:17:57.891 [2024-05-12 04:58:04.807354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.027 ms
00:17:57.891 [2024-05-12 04:58:04.807365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.807697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.807715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:17:57.891 [2024-05-12 04:58:04.807728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms
00:17:57.891 [2024-05-12 04:58:04.807741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
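Note on the scrub step above: wiping the 4 GiB NV cache data region dominates this first-time startup, 2479.410 ms of the 2854.368 ms total that the finish message below reports, or roughly 1.6 GiB/s of zeroing. A quick check of that rate (awk arithmetic only, not from the harness):

  awk 'BEGIN { printf "%.2f GiB/s\n", 4 / 2.479410 }'   # ~1.61 GiB/s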
00:17:57.891 [2024-05-12 04:58:04.881459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.881505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:17:57.891 [2024-05-12 04:58:04.881559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.663 ms
00:17:57.891 [2024-05-12 04:58:04.881572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.913966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.914008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:17:57.891 [2024-05-12 04:58:04.914043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.346 ms
00:17:57.891 [2024-05-12 04:58:04.914055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.916163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.916204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:17:57.891 [2024-05-12 04:58:04.916285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms
00:17:57.891 [2024-05-12 04:58:04.916312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.945830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.945883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:57.891 [2024-05-12 04:58:04.945920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.457 ms
00:17:57.891 [2024-05-12 04:58:04.945931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.945981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.945998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:57.891 [2024-05-12 04:58:04.946012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:57.891 [2024-05-12 04:58:04.946024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.946135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:57.891 [2024-05-12 04:58:04.946153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:57.891 [2024-05-12 04:58:04.946167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:17:57.891 [2024-05-12 04:58:04.946178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:57.891 [2024-05-12 04:58:04.947285] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2854.368 ms, result 0
00:17:57.891 {
00:17:57.891 "name": "ftl0",
00:17:57.891 "uuid": "67a60a42-57de-4e50-99cd-a287ef345750"
00:17:57.891 }
00:17:57.891 04:58:04 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:17:57.891 04:58:04 -- ftl/bdevperf.sh@29 -- # jq -r .name
00:17:57.891 04:58:04 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0
00:17:58.150 04:58:05 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
00:17:58.408 [2024-05-12 04:58:05.351558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
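Note on the two commands above: before driving any I/O, the script confirms the new bdev answers RPCs under its own name by piping bdev_ftl_get_stats through jq and grep; bdevperf.py perform_tests then instructs the already-running bdevperf process over its RPC socket. The standalone equivalent of the check, reusing the $rpc_py variable defined earlier in this run:

  # sanity check: the FTL bdev must report stats under the name "ftl0"
  $rpc_py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

The three perform_tests invocations in this run vary only queue depth (-q), workload (-w), run time (-t) and IO size (-o).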
00:17:58.408 I/O size of 69632 is greater than zero copy threshold (65536).
00:17:58.408 Zero copy mechanism will not be used.
00:17:58.408 Running I/O for 4 seconds...
00:18:02.595
00:18:02.595 Latency(us)
00:18:02.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:02.595 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:02.595 ftl0 : 4.00 1752.81 116.40 0.00 0.00 595.50 240.17 1139.43
00:18:02.595 ===================================================================================================================
00:18:02.595 Total : 1752.81 116.40 0.00 0.00 595.50 240.17 1139.43
00:18:02.595 [2024-05-12 04:58:09.360724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:02.595 0
00:18:02.595 04:58:09 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:18:02.595 [2024-05-12 04:58:09.506986] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:02.595 Running I/O for 4 seconds...
00:18:06.781
00:18:06.781 Latency(us)
00:18:06.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:06.781 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:06.781 ftl0 : 4.02 7788.73 30.42 0.00 0.00 16392.29 294.17 40751.48
00:18:06.781 ===================================================================================================================
00:18:06.781 Total : 7788.73 30.42 0.00 0.00 16392.29 0.00 40751.48
00:18:06.781 [2024-05-12 04:58:13.534129] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:06.781 0
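Note on the two result tables above: the MiB/s column is simply IOPS times IO size, 1752.81 IOPS at 69632 bytes (68 KiB) per IO giving the 116.40 MiB/s of the depth-1 job, and 7788.73 IOPS at 4096 bytes giving the 30.42 MiB/s of the depth-128 job. Reproducing the conversion (awk arithmetic only, not part of the run):

  awk 'BEGIN { printf "%.2f and %.2f MiB/s\n", 1752.81 * 69632 / 1048576, 7788.73 * 4096 / 1048576 }'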
00:18:06.781 04:58:13 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:18:06.781 [2024-05-12 04:58:13.658052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:06.781 Running I/O for 4 seconds...
00:18:10.971
00:18:10.971 Latency(us)
00:18:10.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:10.971 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:10.971 Verification LBA range: start 0x0 length 0x1400000
00:18:10.971 ftl0 : 4.01 8233.47 32.16 0.00 0.00 15502.67 233.66 24069.59
00:18:10.971 ===================================================================================================================
00:18:10.971 Total : 8233.47 32.16 0.00 0.00 15502.67 0.00 24069.59
00:18:10.971 0
00:18:10.971 [2024-05-12 04:58:17.682879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:10.971 04:58:17 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:18:10.972 [2024-05-12 04:58:17.939241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.972 [2024-05-12 04:58:17.939343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:10.972 [2024-05-12 04:58:17.939385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:10.972 [2024-05-12 04:58:17.939397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:10.972 [2024-05-12 04:58:17.939432] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:10.972 [2024-05-12 04:58:17.942783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.972 [2024-05-12 04:58:17.942822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:10.972 [2024-05-12 04:58:17.942870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms
00:18:10.972 [2024-05-12 04:58:17.942889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:10.972 [2024-05-12 04:58:17.944583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.972 [2024-05-12 04:58:17.944675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:10.972 [2024-05-12 04:58:17.944693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms
00:18:10.972 [2024-05-12 04:58:17.944706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:11.231 [2024-05-12 04:58:18.120700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:11.231 [2024-05-12 04:58:18.120783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:11.231 [2024-05-12 04:58:18.120805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 175.970 ms
00:18:11.231 [2024-05-12 04:58:18.120819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:11.232 [2024-05-12 04:58:18.127172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:11.232 [2024-05-12 04:58:18.127226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:18:11.232 [2024-05-12 04:58:18.127287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.292 ms
00:18:11.232 [2024-05-12 04:58:18.127301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:11.232 [2024-05-12 04:58:18.160677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:11.232 [2024-05-12 04:58:18.160739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:11.232 [2024-05-12 04:58:18.160757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration:
33.280 ms 00:18:11.232 [2024-05-12 04:58:18.160772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.232 [2024-05-12 04:58:18.179034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.232 [2024-05-12 04:58:18.179085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:11.232 [2024-05-12 04:58:18.179118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.218 ms 00:18:11.232 [2024-05-12 04:58:18.179131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.232 [2024-05-12 04:58:18.179381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.232 [2024-05-12 04:58:18.179409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:11.232 [2024-05-12 04:58:18.179426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:18:11.232 [2024-05-12 04:58:18.179439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.232 [2024-05-12 04:58:18.208782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.232 [2024-05-12 04:58:18.208828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:11.232 [2024-05-12 04:58:18.208877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.321 ms 00:18:11.232 [2024-05-12 04:58:18.208890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.232 [2024-05-12 04:58:18.237467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.232 [2024-05-12 04:58:18.237526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:11.232 [2024-05-12 04:58:18.237542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.534 ms 00:18:11.232 [2024-05-12 04:58:18.237556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.232 [2024-05-12 04:58:18.264858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.232 [2024-05-12 04:58:18.264918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:11.232 [2024-05-12 04:58:18.264935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.246 ms 00:18:11.232 [2024-05-12 04:58:18.264947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.232 [2024-05-12 04:58:18.292012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.232 [2024-05-12 04:58:18.292073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:11.232 [2024-05-12 04:58:18.292091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.969 ms 00:18:11.232 [2024-05-12 04:58:18.292103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.232 [2024-05-12 04:58:18.292144] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:11.232 [2024-05-12 04:58:18.292170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 
04:58:18.292271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:18:11.232 [2024-05-12 04:58:18.292655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:11.232 [2024-05-12 04:58:18.292969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.292981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.292997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:11.233 [2024-05-12 04:58:18.293664] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:11.233 [2024-05-12 04:58:18.293676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67a60a42-57de-4e50-99cd-a287ef345750 00:18:11.233 [2024-05-12 04:58:18.293692] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:11.233 [2024-05-12 04:58:18.293703] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:11.233 
[2024-05-12 04:58:18.293715] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:11.233 [2024-05-12 04:58:18.293727] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:11.233 [2024-05-12 04:58:18.293739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:11.233 [2024-05-12 04:58:18.293750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:11.233 [2024-05-12 04:58:18.293763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:11.233 [2024-05-12 04:58:18.293773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:11.233 [2024-05-12 04:58:18.293785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:11.233 [2024-05-12 04:58:18.293796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.233 [2024-05-12 04:58:18.293812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:11.233 [2024-05-12 04:58:18.293824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:18:11.233 [2024-05-12 04:58:18.293837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.233 [2024-05-12 04:58:18.308944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.233 [2024-05-12 04:58:18.309000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:11.233 [2024-05-12 04:58:18.309017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.036 ms 00:18:11.233 [2024-05-12 04:58:18.309033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.233 [2024-05-12 04:58:18.309320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.233 [2024-05-12 04:58:18.309342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:11.233 [2024-05-12 04:58:18.309356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:18:11.233 [2024-05-12 04:58:18.309369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.233 [2024-05-12 04:58:18.355540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.233 [2024-05-12 04:58:18.355611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.233 [2024-05-12 04:58:18.355630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.233 [2024-05-12 04:58:18.355643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.233 [2024-05-12 04:58:18.355707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.233 [2024-05-12 04:58:18.355725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.233 [2024-05-12 04:58:18.355736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.233 [2024-05-12 04:58:18.355763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.233 [2024-05-12 04:58:18.355899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.233 [2024-05-12 04:58:18.355926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.233 [2024-05-12 04:58:18.355940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.233 [2024-05-12 04:58:18.355956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.233 [2024-05-12 04:58:18.355982] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:11.233 [2024-05-12 04:58:18.356005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.233 [2024-05-12 04:58:18.356018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.233 [2024-05-12 04:58:18.356031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.499 [2024-05-12 04:58:18.445729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.499 [2024-05-12 04:58:18.445814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.499 [2024-05-12 04:58:18.445832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.499 [2024-05-12 04:58:18.445861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.499 [2024-05-12 04:58:18.480317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.500 [2024-05-12 04:58:18.480377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.500 [2024-05-12 04:58:18.480410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.500 [2024-05-12 04:58:18.480423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.500 [2024-05-12 04:58:18.480496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.500 [2024-05-12 04:58:18.480517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.500 [2024-05-12 04:58:18.480529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.500 [2024-05-12 04:58:18.480544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.500 [2024-05-12 04:58:18.480610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.500 [2024-05-12 04:58:18.480661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.500 [2024-05-12 04:58:18.480677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.500 [2024-05-12 04:58:18.480691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.500 [2024-05-12 04:58:18.480807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.500 [2024-05-12 04:58:18.480830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.500 [2024-05-12 04:58:18.480844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.500 [2024-05-12 04:58:18.480858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.500 [2024-05-12 04:58:18.480910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.500 [2024-05-12 04:58:18.480938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:11.500 [2024-05-12 04:58:18.480952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.500 [2024-05-12 04:58:18.480969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.500 [2024-05-12 04:58:18.481013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.500 [2024-05-12 04:58:18.481031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:11.500 [2024-05-12 04:58:18.481044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.500 [2024-05-12 04:58:18.481060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
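Once the 'FTL shutdown' management process below finishes with result 0, the bdevperf process (pid 72873) is reaped through autotest_common.sh's killprocess helper. A minimal sketch of that helper, reconstructed only from the xtrace that follows (the real script's handling of sudo-owned processes may differ):

    killprocess() {
        # a PID argument is required ('[' -z 72873 ']' in the trace)
        [ -z "$1" ] && return 1
        local pid=$1
        # bail out unless the process is still alive
        kill -0 "$pid" || return 0
        if [ "$(uname)" = Linux ]; then
            # resolve the command name; here it comes back as reactor_0,
            # so the sudo guard ('[' reactor_0 = sudo ']') does not trip
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # wait reaps the process and propagates its exit status to the caller
        wait "$pid"
    }

The trailing wait is what surfaces bdevperf's exit code to the test script before the timing and cleanup steps below run.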
00:18:11.500 [2024-05-12 04:58:18.481112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:11.500 [2024-05-12 04:58:18.481131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:11.500 [2024-05-12 04:58:18.481162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:11.500 [2024-05-12 04:58:18.481176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:11.500 [2024-05-12 04:58:18.481333] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.075 ms, result 0
00:18:11.500 true
00:18:11.500 04:58:18 -- ftl/bdevperf.sh@37 -- # killprocess 72873
00:18:11.500 04:58:18 -- common/autotest_common.sh@926 -- # '[' -z 72873 ']'
00:18:11.500 04:58:18 -- common/autotest_common.sh@930 -- # kill -0 72873
00:18:11.500 04:58:18 -- common/autotest_common.sh@931 -- # uname
00:18:11.500 04:58:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:11.500 04:58:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72873
00:18:11.500 killing process with pid 72873
Received shutdown signal, test time was about 4.000000 seconds
00:18:11.500
00:18:11.500 Latency(us)
00:18:11.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:11.500 ===================================================================================================================
00:18:11.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:11.500 04:58:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:18:11.500 04:58:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:18:11.500 04:58:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72873'
00:18:11.500 04:58:18 -- common/autotest_common.sh@945 -- # kill 72873
00:18:11.500 04:58:18 -- common/autotest_common.sh@950 -- # wait 72873
00:18:15.718 04:58:22 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:18:15.718 04:58:22 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:18:15.718 04:58:22 -- common/autotest_common.sh@718 -- # xtrace_disable
00:18:15.718 04:58:22 -- common/autotest_common.sh@10 -- # set +x
00:18:15.718 Remove shared memory files
00:18:15.718 04:58:22 -- ftl/bdevperf.sh@41 -- # remove_shm
00:18:15.718 04:58:22 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:15.718 04:58:22 -- ftl/common.sh@205 -- # rm -f rm -f
00:18:15.718 04:58:22 -- ftl/common.sh@206 -- # rm -f rm -f
00:18:15.718 04:58:22 -- ftl/common.sh@207 -- # rm -f rm -f
00:18:15.718 04:58:22 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:18:15.718 04:58:22 -- ftl/common.sh@209 -- # rm -f rm -f
00:18:15.718 ************************************
00:18:15.718 END TEST ftl_bdevperf
00:18:15.718 ************************************
00:18:15.718
00:18:15.718 real 0m24.520s
00:18:15.718 user 0m27.725s
00:18:15.718 sys 0m1.118s
00:18:15.718 04:58:22 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:18:15.718 04:58:22 -- common/autotest_common.sh@10 -- # set +x
00:18:15.718 04:58:22 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:18:15.718 04:58:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:18:15.718 04:58:22 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:18:15.718 04:58:22 -- common/autotest_common.sh@10 -- # set +x
00:18:15.718 ************************************
00:18:15.718 START TEST ftl_trim 00:18:15.718 ************************************ 00:18:15.718 04:58:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:18:15.718 * Looking for test storage... 00:18:15.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:15.718 04:58:22 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:15.718 04:58:22 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:15.718 04:58:22 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:15.718 04:58:22 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:15.718 04:58:22 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:15.718 04:58:22 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:15.718 04:58:22 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.718 04:58:22 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:15.718 04:58:22 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:15.718 04:58:22 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:15.718 04:58:22 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:15.718 04:58:22 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:15.718 04:58:22 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:15.718 04:58:22 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:15.718 04:58:22 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:15.718 04:58:22 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:15.718 04:58:22 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:15.718 04:58:22 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:15.718 04:58:22 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:15.718 04:58:22 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:15.718 04:58:22 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:15.718 04:58:22 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:15.718 04:58:22 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:15.718 04:58:22 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:15.718 04:58:22 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:15.718 04:58:22 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:15.718 04:58:22 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:15.718 04:58:22 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:15.718 04:58:22 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:15.718 04:58:22 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.718 04:58:22 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:18:15.718 04:58:22 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:18:15.718 04:58:22 -- ftl/trim.sh@25 -- # timeout=240 00:18:15.718 04:58:22 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:15.718 04:58:22 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:15.718 04:58:22 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:15.718 04:58:22 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:18:15.718 04:58:22 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:15.718 04:58:22 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:15.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.718 04:58:22 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:15.718 04:58:22 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:15.718 04:58:22 -- ftl/trim.sh@40 -- # svcpid=73232 00:18:15.718 04:58:22 -- ftl/trim.sh@41 -- # waitforlisten 73232 00:18:15.718 04:58:22 -- common/autotest_common.sh@819 -- # '[' -z 73232 ']' 00:18:15.718 04:58:22 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:15.718 04:58:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.718 04:58:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:15.718 04:58:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.718 04:58:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:15.718 04:58:22 -- common/autotest_common.sh@10 -- # set +x 00:18:15.718 [2024-05-12 04:58:22.392211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:15.718 [2024-05-12 04:58:22.392426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73232 ] 00:18:15.718 [2024-05-12 04:58:22.555831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:15.718 [2024-05-12 04:58:22.728035] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:15.718 [2024-05-12 04:58:22.728662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.718 [2024-05-12 04:58:22.728740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.718 [2024-05-12 04:58:22.728747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.092 04:58:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:17.092 04:58:24 -- common/autotest_common.sh@852 -- # return 0 00:18:17.092 04:58:24 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:17.092 04:58:24 -- ftl/common.sh@54 -- # local name=nvme0 00:18:17.092 04:58:24 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:17.092 04:58:24 -- ftl/common.sh@56 -- # local size=103424 00:18:17.092 04:58:24 -- ftl/common.sh@59 -- # local base_bdev 00:18:17.092 04:58:24 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:17.350 04:58:24 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:17.350 04:58:24 -- ftl/common.sh@62 -- # local base_size 00:18:17.350 04:58:24 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:17.350 04:58:24 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:17.350 04:58:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:17.350 04:58:24 -- common/autotest_common.sh@1359 -- # local bs 00:18:17.350 04:58:24 -- common/autotest_common.sh@1360 -- # local nb 00:18:17.350 04:58:24 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:17.608 
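The trace above has just entered autotest_common.sh's get_bdev_size for nvme0n1; the JSON it parses is dumped next. A short sketch of what the helper evidently does, reconstructed from the trace, with rpc.py standing in for the full scripts/rpc.py path:

    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        # fetch the bdev descriptor over JSON-RPC (printed below for nvme0n1)
        bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
        # extract block size and block count, as the jq calls in the trace do
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        # size in MiB: 4096 B/blk * 1310720 blks / 2^20 = 5120 MiB for nvme0n1
        echo $(( bs * nb / 1024 / 1024 ))
    }

The 5120 that comes back is what later shows up as base_size=5120 and in the [[ 103424 -le 5120 ]] guard further down the trace.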
04:58:24 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:17.608 { 00:18:17.608 "name": "nvme0n1", 00:18:17.608 "aliases": [ 00:18:17.608 "d9837416-410b-4131-bbc2-39db0ce58d19" 00:18:17.608 ], 00:18:17.608 "product_name": "NVMe disk", 00:18:17.608 "block_size": 4096, 00:18:17.608 "num_blocks": 1310720, 00:18:17.609 "uuid": "d9837416-410b-4131-bbc2-39db0ce58d19", 00:18:17.609 "assigned_rate_limits": { 00:18:17.609 "rw_ios_per_sec": 0, 00:18:17.609 "rw_mbytes_per_sec": 0, 00:18:17.609 "r_mbytes_per_sec": 0, 00:18:17.609 "w_mbytes_per_sec": 0 00:18:17.609 }, 00:18:17.609 "claimed": true, 00:18:17.609 "claim_type": "read_many_write_one", 00:18:17.609 "zoned": false, 00:18:17.609 "supported_io_types": { 00:18:17.609 "read": true, 00:18:17.609 "write": true, 00:18:17.609 "unmap": true, 00:18:17.609 "write_zeroes": true, 00:18:17.609 "flush": true, 00:18:17.609 "reset": true, 00:18:17.609 "compare": true, 00:18:17.609 "compare_and_write": false, 00:18:17.609 "abort": true, 00:18:17.609 "nvme_admin": true, 00:18:17.609 "nvme_io": true 00:18:17.609 }, 00:18:17.609 "driver_specific": { 00:18:17.609 "nvme": [ 00:18:17.609 { 00:18:17.609 "pci_address": "0000:00:07.0", 00:18:17.609 "trid": { 00:18:17.609 "trtype": "PCIe", 00:18:17.609 "traddr": "0000:00:07.0" 00:18:17.609 }, 00:18:17.609 "ctrlr_data": { 00:18:17.609 "cntlid": 0, 00:18:17.609 "vendor_id": "0x1b36", 00:18:17.609 "model_number": "QEMU NVMe Ctrl", 00:18:17.609 "serial_number": "12341", 00:18:17.609 "firmware_revision": "8.0.0", 00:18:17.609 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:17.609 "oacs": { 00:18:17.609 "security": 0, 00:18:17.609 "format": 1, 00:18:17.609 "firmware": 0, 00:18:17.609 "ns_manage": 1 00:18:17.609 }, 00:18:17.609 "multi_ctrlr": false, 00:18:17.609 "ana_reporting": false 00:18:17.609 }, 00:18:17.609 "vs": { 00:18:17.609 "nvme_version": "1.4" 00:18:17.609 }, 00:18:17.609 "ns_data": { 00:18:17.609 "id": 1, 00:18:17.609 "can_share": false 00:18:17.609 } 00:18:17.609 } 00:18:17.609 ], 00:18:17.609 "mp_policy": "active_passive" 00:18:17.609 } 00:18:17.609 } 00:18:17.609 ]' 00:18:17.609 04:58:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:17.609 04:58:24 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:17.609 04:58:24 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:17.609 04:58:24 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:17.609 04:58:24 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:17.609 04:58:24 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:17.867 04:58:24 -- ftl/common.sh@63 -- # base_size=5120 00:18:17.867 04:58:24 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:17.867 04:58:24 -- ftl/common.sh@67 -- # clear_lvols 00:18:17.867 04:58:24 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:17.867 04:58:24 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:18.126 04:58:24 -- ftl/common.sh@28 -- # stores=260f386d-ff0e-4aaa-b9cf-077fb7e7aac6 00:18:18.126 04:58:24 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:18.126 04:58:24 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 260f386d-ff0e-4aaa-b9cf-077fb7e7aac6 00:18:18.384 04:58:25 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:18.384 04:58:25 -- ftl/common.sh@68 -- # lvs=aee12a09-dab9-45ac-a5c8-31795c6f7c43 00:18:18.384 04:58:25 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
nvme0n1p0 103424 -t -u aee12a09-dab9-45ac-a5c8-31795c6f7c43 00:18:18.644 04:58:25 -- ftl/trim.sh@43 -- # split_bdev=7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:18.644 04:58:25 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:18.644 04:58:25 -- ftl/common.sh@35 -- # local name=nvc0 00:18:18.644 04:58:25 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:18.644 04:58:25 -- ftl/common.sh@37 -- # local base_bdev=7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:18.644 04:58:25 -- ftl/common.sh@38 -- # local cache_size= 00:18:18.644 04:58:25 -- ftl/common.sh@41 -- # get_bdev_size 7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:18.644 04:58:25 -- common/autotest_common.sh@1357 -- # local bdev_name=7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:18.644 04:58:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:18.644 04:58:25 -- common/autotest_common.sh@1359 -- # local bs 00:18:18.644 04:58:25 -- common/autotest_common.sh@1360 -- # local nb 00:18:18.644 04:58:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:18.903 04:58:25 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:18.903 { 00:18:18.903 "name": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:18.903 "aliases": [ 00:18:18.903 "lvs/nvme0n1p0" 00:18:18.903 ], 00:18:18.903 "product_name": "Logical Volume", 00:18:18.903 "block_size": 4096, 00:18:18.903 "num_blocks": 26476544, 00:18:18.903 "uuid": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:18.903 "assigned_rate_limits": { 00:18:18.903 "rw_ios_per_sec": 0, 00:18:18.903 "rw_mbytes_per_sec": 0, 00:18:18.903 "r_mbytes_per_sec": 0, 00:18:18.903 "w_mbytes_per_sec": 0 00:18:18.903 }, 00:18:18.903 "claimed": false, 00:18:18.903 "zoned": false, 00:18:18.903 "supported_io_types": { 00:18:18.903 "read": true, 00:18:18.903 "write": true, 00:18:18.903 "unmap": true, 00:18:18.903 "write_zeroes": true, 00:18:18.903 "flush": false, 00:18:18.903 "reset": true, 00:18:18.903 "compare": false, 00:18:18.903 "compare_and_write": false, 00:18:18.903 "abort": false, 00:18:18.903 "nvme_admin": false, 00:18:18.903 "nvme_io": false 00:18:18.903 }, 00:18:18.903 "driver_specific": { 00:18:18.903 "lvol": { 00:18:18.903 "lvol_store_uuid": "aee12a09-dab9-45ac-a5c8-31795c6f7c43", 00:18:18.903 "base_bdev": "nvme0n1", 00:18:18.903 "thin_provision": true, 00:18:18.903 "snapshot": false, 00:18:18.903 "clone": false, 00:18:18.903 "esnap_clone": false 00:18:18.903 } 00:18:18.903 } 00:18:18.903 } 00:18:18.903 ]' 00:18:18.903 04:58:25 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:18.903 04:58:25 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:18.903 04:58:25 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:18.903 04:58:26 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:18.903 04:58:26 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:18.903 04:58:26 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:18.903 04:58:26 -- ftl/common.sh@41 -- # local base_size=5171 00:18:18.903 04:58:26 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:18.903 04:58:26 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:19.470 04:58:26 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:19.471 04:58:26 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:19.471 04:58:26 -- ftl/common.sh@48 -- # get_bdev_size 7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:19.471 04:58:26 
-- common/autotest_common.sh@1357 -- # local bdev_name=7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:19.471 04:58:26 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:19.471 04:58:26 -- common/autotest_common.sh@1359 -- # local bs 00:18:19.471 04:58:26 -- common/autotest_common.sh@1360 -- # local nb 00:18:19.471 04:58:26 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:19.471 04:58:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:19.471 { 00:18:19.471 "name": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:19.471 "aliases": [ 00:18:19.471 "lvs/nvme0n1p0" 00:18:19.471 ], 00:18:19.471 "product_name": "Logical Volume", 00:18:19.471 "block_size": 4096, 00:18:19.471 "num_blocks": 26476544, 00:18:19.471 "uuid": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:19.471 "assigned_rate_limits": { 00:18:19.471 "rw_ios_per_sec": 0, 00:18:19.471 "rw_mbytes_per_sec": 0, 00:18:19.471 "r_mbytes_per_sec": 0, 00:18:19.471 "w_mbytes_per_sec": 0 00:18:19.471 }, 00:18:19.471 "claimed": false, 00:18:19.471 "zoned": false, 00:18:19.471 "supported_io_types": { 00:18:19.471 "read": true, 00:18:19.471 "write": true, 00:18:19.471 "unmap": true, 00:18:19.471 "write_zeroes": true, 00:18:19.471 "flush": false, 00:18:19.471 "reset": true, 00:18:19.471 "compare": false, 00:18:19.471 "compare_and_write": false, 00:18:19.471 "abort": false, 00:18:19.471 "nvme_admin": false, 00:18:19.471 "nvme_io": false 00:18:19.471 }, 00:18:19.471 "driver_specific": { 00:18:19.471 "lvol": { 00:18:19.471 "lvol_store_uuid": "aee12a09-dab9-45ac-a5c8-31795c6f7c43", 00:18:19.471 "base_bdev": "nvme0n1", 00:18:19.471 "thin_provision": true, 00:18:19.471 "snapshot": false, 00:18:19.471 "clone": false, 00:18:19.471 "esnap_clone": false 00:18:19.471 } 00:18:19.471 } 00:18:19.471 } 00:18:19.471 ]' 00:18:19.471 04:58:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:19.729 04:58:26 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:19.729 04:58:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:19.729 04:58:26 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:19.729 04:58:26 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:19.729 04:58:26 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:19.730 04:58:26 -- ftl/common.sh@48 -- # cache_size=5171 00:18:19.730 04:58:26 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:19.988 04:58:26 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:19.988 04:58:26 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:19.988 04:58:26 -- ftl/trim.sh@47 -- # get_bdev_size 7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:19.988 04:58:26 -- common/autotest_common.sh@1357 -- # local bdev_name=7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:19.988 04:58:26 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:19.988 04:58:26 -- common/autotest_common.sh@1359 -- # local bs 00:18:19.988 04:58:26 -- common/autotest_common.sh@1360 -- # local nb 00:18:19.988 04:58:26 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a3c5216-d07c-4e42-b0f5-00065d9ff00f 00:18:20.247 04:58:27 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:20.247 { 00:18:20.247 "name": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:20.247 "aliases": [ 00:18:20.247 "lvs/nvme0n1p0" 00:18:20.247 ], 00:18:20.247 "product_name": "Logical Volume", 00:18:20.247 "block_size": 4096, 00:18:20.247 
"num_blocks": 26476544, 00:18:20.247 "uuid": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:20.247 "assigned_rate_limits": { 00:18:20.247 "rw_ios_per_sec": 0, 00:18:20.247 "rw_mbytes_per_sec": 0, 00:18:20.247 "r_mbytes_per_sec": 0, 00:18:20.247 "w_mbytes_per_sec": 0 00:18:20.247 }, 00:18:20.247 "claimed": false, 00:18:20.247 "zoned": false, 00:18:20.247 "supported_io_types": { 00:18:20.247 "read": true, 00:18:20.247 "write": true, 00:18:20.247 "unmap": true, 00:18:20.247 "write_zeroes": true, 00:18:20.247 "flush": false, 00:18:20.247 "reset": true, 00:18:20.247 "compare": false, 00:18:20.247 "compare_and_write": false, 00:18:20.247 "abort": false, 00:18:20.247 "nvme_admin": false, 00:18:20.247 "nvme_io": false 00:18:20.247 }, 00:18:20.247 "driver_specific": { 00:18:20.247 "lvol": { 00:18:20.247 "lvol_store_uuid": "aee12a09-dab9-45ac-a5c8-31795c6f7c43", 00:18:20.247 "base_bdev": "nvme0n1", 00:18:20.247 "thin_provision": true, 00:18:20.247 "snapshot": false, 00:18:20.247 "clone": false, 00:18:20.247 "esnap_clone": false 00:18:20.247 } 00:18:20.247 } 00:18:20.247 } 00:18:20.247 ]' 00:18:20.247 04:58:27 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:20.247 04:58:27 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:20.247 04:58:27 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:20.247 04:58:27 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:20.247 04:58:27 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:20.247 04:58:27 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:20.247 04:58:27 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:20.247 04:58:27 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7a3c5216-d07c-4e42-b0f5-00065d9ff00f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:20.507 [2024-05-12 04:58:27.466305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.466374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:20.507 [2024-05-12 04:58:27.466420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:20.507 [2024-05-12 04:58:27.466432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.469885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.469932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.507 [2024-05-12 04:58:27.469970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:18:20.507 [2024-05-12 04:58:27.469982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.470148] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:20.507 [2024-05-12 04:58:27.471142] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:20.507 [2024-05-12 04:58:27.471189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.471205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.507 [2024-05-12 04:58:27.471261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:18:20.507 [2024-05-12 04:58:27.471274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.471500] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c7a363c9-0ae1-4712-9c4d-0d43e1b35de1 00:18:20.507 [2024-05-12 04:58:27.472642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.472684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:20.507 [2024-05-12 04:58:27.472717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:20.507 [2024-05-12 04:58:27.472729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.477602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.477663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.507 [2024-05-12 04:58:27.477697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.788 ms 00:18:20.507 [2024-05-12 04:58:27.477710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.477899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.477926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.507 [2024-05-12 04:58:27.477940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:20.507 [2024-05-12 04:58:27.477957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.478008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.478026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:20.507 [2024-05-12 04:58:27.478039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:20.507 [2024-05-12 04:58:27.478054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.478105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:20.507 [2024-05-12 04:58:27.482606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.482647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.507 [2024-05-12 04:58:27.482682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.510 ms 00:18:20.507 [2024-05-12 04:58:27.482693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.482780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.482798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:20.507 [2024-05-12 04:58:27.482812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:20.507 [2024-05-12 04:58:27.482823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.482877] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:20.507 [2024-05-12 04:58:27.483029] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:20.507 [2024-05-12 04:58:27.483052] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:20.507 [2024-05-12 04:58:27.483067] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:18:20.507 [2024-05-12 04:58:27.483083] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:20.507 [2024-05-12 04:58:27.483096] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:20.507 [2024-05-12 04:58:27.483109] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:20.507 [2024-05-12 04:58:27.483120] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:20.507 [2024-05-12 04:58:27.483134] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:20.507 [2024-05-12 04:58:27.483146] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:20.507 [2024-05-12 04:58:27.483159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.483170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:20.507 [2024-05-12 04:58:27.483184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:18:20.507 [2024-05-12 04:58:27.483210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.483335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.507 [2024-05-12 04:58:27.483354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:20.507 [2024-05-12 04:58:27.483384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:20.507 [2024-05-12 04:58:27.483396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.507 [2024-05-12 04:58:27.483512] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:20.507 [2024-05-12 04:58:27.483528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:20.507 [2024-05-12 04:58:27.483543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:20.507 [2024-05-12 04:58:27.483554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.507 [2024-05-12 04:58:27.483568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:20.507 [2024-05-12 04:58:27.483578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:20.507 [2024-05-12 04:58:27.483591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:20.507 [2024-05-12 04:58:27.483601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:20.507 [2024-05-12 04:58:27.483613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:20.507 [2024-05-12 04:58:27.483623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:20.507 [2024-05-12 04:58:27.483651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:20.507 [2024-05-12 04:58:27.483662] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:20.507 [2024-05-12 04:58:27.483676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:20.507 [2024-05-12 04:58:27.483687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:20.507 [2024-05-12 04:58:27.483700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:20.507 [2024-05-12 04:58:27.483711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.507 [2024-05-12 04:58:27.483725] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:20.507 [2024-05-12 04:58:27.483736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:20.507 [2024-05-12 04:58:27.483749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.507 [2024-05-12 04:58:27.483759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:20.507 [2024-05-12 04:58:27.483772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:20.507 [2024-05-12 04:58:27.483783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:20.507 [2024-05-12 04:58:27.483795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:20.507 [2024-05-12 04:58:27.483806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:20.507 [2024-05-12 04:58:27.483819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:20.507 [2024-05-12 04:58:27.483829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:20.507 [2024-05-12 04:58:27.483842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:20.507 [2024-05-12 04:58:27.483854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:20.507 [2024-05-12 04:58:27.483867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:20.508 [2024-05-12 04:58:27.483889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:20.508 [2024-05-12 04:58:27.483903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:20.508 [2024-05-12 04:58:27.483926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:20.508 [2024-05-12 04:58:27.483940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:20.508 [2024-05-12 04:58:27.483951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:20.508 [2024-05-12 04:58:27.483963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:20.508 [2024-05-12 04:58:27.483973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:20.508 [2024-05-12 04:58:27.483986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:20.508 [2024-05-12 04:58:27.483996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:20.508 [2024-05-12 04:58:27.484010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:20.508 [2024-05-12 04:58:27.484020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:20.508 [2024-05-12 04:58:27.484032] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:20.508 [2024-05-12 04:58:27.484044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:20.508 [2024-05-12 04:58:27.484057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:20.508 [2024-05-12 04:58:27.484068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.508 [2024-05-12 04:58:27.484081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:20.508 [2024-05-12 04:58:27.484092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:20.508 [2024-05-12 04:58:27.484105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:20.508 [2024-05-12 04:58:27.484116] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:20.508 [2024-05-12 04:58:27.484130] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:20.508 [2024-05-12 04:58:27.484141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:20.508 [2024-05-12 04:58:27.484155] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:20.508 [2024-05-12 04:58:27.484169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:20.508 [2024-05-12 04:58:27.484184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:20.508 [2024-05-12 04:58:27.484195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:20.508 [2024-05-12 04:58:27.484209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:20.508 [2024-05-12 04:58:27.484234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:20.508 [2024-05-12 04:58:27.484250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:20.508 [2024-05-12 04:58:27.484262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:20.508 [2024-05-12 04:58:27.484282] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:20.508 [2024-05-12 04:58:27.484294] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:20.508 [2024-05-12 04:58:27.484308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:20.508 [2024-05-12 04:58:27.484319] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:20.508 [2024-05-12 04:58:27.484333] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:20.508 [2024-05-12 04:58:27.484345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:20.508 [2024-05-12 04:58:27.484363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:20.508 [2024-05-12 04:58:27.484374] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:20.508 [2024-05-12 04:58:27.484392] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:20.508 [2024-05-12 04:58:27.484404] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:20.508 [2024-05-12 04:58:27.484418] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:20.508 [2024-05-12 04:58:27.484429] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:20.508 [2024-05-12 04:58:27.484443] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:20.508 [2024-05-12 04:58:27.484455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.484469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:20.508 [2024-05-12 04:58:27.484481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:18:20.508 [2024-05-12 04:58:27.484493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.502469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.502521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.508 [2024-05-12 04:58:27.502562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.884 ms 00:18:20.508 [2024-05-12 04:58:27.502580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.502744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.502771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:20.508 [2024-05-12 04:58:27.502786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:20.508 [2024-05-12 04:58:27.502799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.543467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.543531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.508 [2024-05-12 04:58:27.543580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.621 ms 00:18:20.508 [2024-05-12 04:58:27.543593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.543706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.543747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.508 [2024-05-12 04:58:27.543761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:20.508 [2024-05-12 04:58:27.543773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.544159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.544184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.508 [2024-05-12 04:58:27.544201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:18:20.508 [2024-05-12 04:58:27.544214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.544465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.544489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.508 [2024-05-12 04:58:27.544502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:18:20.508 [2024-05-12 04:58:27.544515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.577911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 
04:58:27.577964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.508 [2024-05-12 04:58:27.577999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.359 ms 00:18:20.508 [2024-05-12 04:58:27.578013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.508 [2024-05-12 04:58:27.591215] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:20.508 [2024-05-12 04:58:27.605472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.508 [2024-05-12 04:58:27.605542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:20.508 [2024-05-12 04:58:27.605581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.253 ms 00:18:20.508 [2024-05-12 04:58:27.605593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.767 [2024-05-12 04:58:27.676105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.767 [2024-05-12 04:58:27.676177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:20.767 [2024-05-12 04:58:27.676234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.381 ms 00:18:20.767 [2024-05-12 04:58:27.676261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.767 [2024-05-12 04:58:27.676430] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:20.767 [2024-05-12 04:58:27.676454] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:23.301 [2024-05-12 04:58:29.997942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:29.998012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:23.301 [2024-05-12 04:58:29.998052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2321.523 ms 00:18:23.301 [2024-05-12 04:58:29.998064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:29.998383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:29.998407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:23.301 [2024-05-12 04:58:29.998423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:23.301 [2024-05-12 04:58:29.998434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.032077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.032137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:23.301 [2024-05-12 04:58:30.032161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.586 ms 00:18:23.301 [2024-05-12 04:58:30.032174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.061892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.061933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:23.301 [2024-05-12 04:58:30.061977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.552 ms 00:18:23.301 [2024-05-12 04:58:30.061989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.062446] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.062468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:23.301 [2024-05-12 04:58:30.062483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:18:23.301 [2024-05-12 04:58:30.062494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.138838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.138892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:23.301 [2024-05-12 04:58:30.138929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.299 ms 00:18:23.301 [2024-05-12 04:58:30.138941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.169765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.169806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:23.301 [2024-05-12 04:58:30.169841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.722 ms 00:18:23.301 [2024-05-12 04:58:30.169872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.173794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.173834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:23.301 [2024-05-12 04:58:30.173870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:18:23.301 [2024-05-12 04:58:30.173882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.203703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.203745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:23.301 [2024-05-12 04:58:30.203780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.774 ms 00:18:23.301 [2024-05-12 04:58:30.203792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.203933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.301 [2024-05-12 04:58:30.203953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:23.301 [2024-05-12 04:58:30.203969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:23.301 [2024-05-12 04:58:30.203981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.301 [2024-05-12 04:58:30.204077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.302 [2024-05-12 04:58:30.204093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:23.302 [2024-05-12 04:58:30.204110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:23.302 [2024-05-12 04:58:30.204121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.302 [2024-05-12 04:58:30.205324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.302 [2024-05-12 04:58:30.209506] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2738.608 ms, result 0 00:18:23.302 [2024-05-12 04:58:30.210540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread 00:18:23.302 { 00:18:23.302 "name": "ftl0", 00:18:23.302 "uuid": "c7a363c9-0ae1-4712-9c4d-0d43e1b35de1" 00:18:23.302 } 00:18:23.302 04:58:30 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:23.302 04:58:30 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:18:23.302 04:58:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:23.302 04:58:30 -- common/autotest_common.sh@889 -- # local i 00:18:23.302 04:58:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:23.302 04:58:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:23.302 04:58:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:23.560 04:58:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:23.819 [ 00:18:23.819 { 00:18:23.819 "name": "ftl0", 00:18:23.819 "aliases": [ 00:18:23.819 "c7a363c9-0ae1-4712-9c4d-0d43e1b35de1" 00:18:23.819 ], 00:18:23.819 "product_name": "FTL disk", 00:18:23.819 "block_size": 4096, 00:18:23.819 "num_blocks": 23592960, 00:18:23.819 "uuid": "c7a363c9-0ae1-4712-9c4d-0d43e1b35de1", 00:18:23.819 "assigned_rate_limits": { 00:18:23.819 "rw_ios_per_sec": 0, 00:18:23.819 "rw_mbytes_per_sec": 0, 00:18:23.819 "r_mbytes_per_sec": 0, 00:18:23.819 "w_mbytes_per_sec": 0 00:18:23.819 }, 00:18:23.819 "claimed": false, 00:18:23.819 "zoned": false, 00:18:23.819 "supported_io_types": { 00:18:23.819 "read": true, 00:18:23.819 "write": true, 00:18:23.819 "unmap": true, 00:18:23.819 "write_zeroes": true, 00:18:23.819 "flush": true, 00:18:23.819 "reset": false, 00:18:23.819 "compare": false, 00:18:23.819 "compare_and_write": false, 00:18:23.819 "abort": false, 00:18:23.819 "nvme_admin": false, 00:18:23.819 "nvme_io": false 00:18:23.819 }, 00:18:23.819 "driver_specific": { 00:18:23.819 "ftl": { 00:18:23.819 "base_bdev": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:23.819 "cache": "nvc0n1p0" 00:18:23.819 } 00:18:23.819 } 00:18:23.819 } 00:18:23.819 ] 00:18:23.819 04:58:30 -- common/autotest_common.sh@895 -- # return 0 00:18:23.819 04:58:30 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:23.819 04:58:30 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:24.079 04:58:31 -- ftl/trim.sh@56 -- # echo ']}' 00:18:24.079 04:58:31 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:24.337 04:58:31 -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:24.337 { 00:18:24.337 "name": "ftl0", 00:18:24.337 "aliases": [ 00:18:24.337 "c7a363c9-0ae1-4712-9c4d-0d43e1b35de1" 00:18:24.337 ], 00:18:24.337 "product_name": "FTL disk", 00:18:24.337 "block_size": 4096, 00:18:24.337 "num_blocks": 23592960, 00:18:24.337 "uuid": "c7a363c9-0ae1-4712-9c4d-0d43e1b35de1", 00:18:24.337 "assigned_rate_limits": { 00:18:24.337 "rw_ios_per_sec": 0, 00:18:24.337 "rw_mbytes_per_sec": 0, 00:18:24.337 "r_mbytes_per_sec": 0, 00:18:24.337 "w_mbytes_per_sec": 0 00:18:24.337 }, 00:18:24.337 "claimed": false, 00:18:24.337 "zoned": false, 00:18:24.337 "supported_io_types": { 00:18:24.337 "read": true, 00:18:24.337 "write": true, 00:18:24.337 "unmap": true, 00:18:24.337 "write_zeroes": true, 00:18:24.337 "flush": true, 00:18:24.337 "reset": false, 00:18:24.337 "compare": false, 00:18:24.337 "compare_and_write": false, 00:18:24.337 "abort": false, 00:18:24.337 "nvme_admin": false, 00:18:24.337 "nvme_io": false 00:18:24.337 }, 00:18:24.337 "driver_specific": { 00:18:24.337 "ftl": { 
00:18:24.337 "base_bdev": "7a3c5216-d07c-4e42-b0f5-00065d9ff00f", 00:18:24.337 "cache": "nvc0n1p0" 00:18:24.337 } 00:18:24.337 } 00:18:24.337 } 00:18:24.337 ]' 00:18:24.337 04:58:31 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:24.337 04:58:31 -- ftl/trim.sh@60 -- # nb=23592960 00:18:24.337 04:58:31 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:24.596 [2024-05-12 04:58:31.558858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.596 [2024-05-12 04:58:31.558935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:24.596 [2024-05-12 04:58:31.558956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:24.596 [2024-05-12 04:58:31.558970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.596 [2024-05-12 04:58:31.559019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:24.596 [2024-05-12 04:58:31.562336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.596 [2024-05-12 04:58:31.562370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:24.596 [2024-05-12 04:58:31.562389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:18:24.596 [2024-05-12 04:58:31.562401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.596 [2024-05-12 04:58:31.563035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.596 [2024-05-12 04:58:31.563066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:24.596 [2024-05-12 04:58:31.563086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:18:24.596 [2024-05-12 04:58:31.563099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.596 [2024-05-12 04:58:31.566919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.596 [2024-05-12 04:58:31.566949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:24.596 [2024-05-12 04:58:31.566984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.779 ms 00:18:24.597 [2024-05-12 04:58:31.566996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.597 [2024-05-12 04:58:31.574443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.597 [2024-05-12 04:58:31.574476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:24.597 [2024-05-12 04:58:31.574513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.367 ms 00:18:24.597 [2024-05-12 04:58:31.574524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.597 [2024-05-12 04:58:31.606418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.597 [2024-05-12 04:58:31.606459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:24.597 [2024-05-12 04:58:31.606496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.795 ms 00:18:24.597 [2024-05-12 04:58:31.606507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.597 [2024-05-12 04:58:31.626008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.597 [2024-05-12 04:58:31.626055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:24.597 [2024-05-12 04:58:31.626093] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.409 ms 00:18:24.597 [2024-05-12 04:58:31.626105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.597 [2024-05-12 04:58:31.626412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.597 [2024-05-12 04:58:31.626435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:24.597 [2024-05-12 04:58:31.626453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:24.597 [2024-05-12 04:58:31.626465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.597 [2024-05-12 04:58:31.656569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.597 [2024-05-12 04:58:31.656623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:24.597 [2024-05-12 04:58:31.656658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.060 ms 00:18:24.597 [2024-05-12 04:58:31.656669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.597 [2024-05-12 04:58:31.686435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.597 [2024-05-12 04:58:31.686487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:24.597 [2024-05-12 04:58:31.686524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.673 ms 00:18:24.597 [2024-05-12 04:58:31.686536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.597 [2024-05-12 04:58:31.716280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.597 [2024-05-12 04:58:31.716334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:24.597 [2024-05-12 04:58:31.716371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.649 ms 00:18:24.597 [2024-05-12 04:58:31.716382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.857 [2024-05-12 04:58:31.748745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.857 [2024-05-12 04:58:31.748786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:24.857 [2024-05-12 04:58:31.748825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.215 ms 00:18:24.857 [2024-05-12 04:58:31.748836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.857 [2024-05-12 04:58:31.748953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:24.857 [2024-05-12 04:58:31.748979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.748999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 
[2024-05-12 04:58:31.749074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:24.857 [2024-05-12 04:58:31.749323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:18:24.858 [2024-05-12 04:58:31.749476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.749995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:24.858 [2024-05-12 04:58:31.750408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:24.858 [2024-05-12 04:58:31.750422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c7a363c9-0ae1-4712-9c4d-0d43e1b35de1 00:18:24.858 [2024-05-12 04:58:31.750434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:24.858 [2024-05-12 04:58:31.750447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:24.858 [2024-05-12 04:58:31.750458] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:24.858 [2024-05-12 04:58:31.750471] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:24.858 [2024-05-12 04:58:31.750482] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:24.858 [2024-05-12 04:58:31.750495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:18:24.858 [2024-05-12 04:58:31.750506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:24.858 [2024-05-12 04:58:31.750520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:24.858 [2024-05-12 04:58:31.750530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:24.858 [2024-05-12 04:58:31.750543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.858 [2024-05-12 04:58:31.750555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:24.858 [2024-05-12 04:58:31.750569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:18:24.858 [2024-05-12 04:58:31.750583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.858 [2024-05-12 04:58:31.767439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.858 [2024-05-12 04:58:31.767476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:24.858 [2024-05-12 04:58:31.767512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.809 ms 00:18:24.858 [2024-05-12 04:58:31.767523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.767793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.859 [2024-05-12 04:58:31.767812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:24.859 [2024-05-12 04:58:31.767829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:24.859 [2024-05-12 04:58:31.767856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.825026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.825078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.859 [2024-05-12 04:58:31.825117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.825129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.825307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.825328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.859 [2024-05-12 04:58:31.825346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.825357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.825445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.825464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.859 [2024-05-12 04:58:31.825478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.825489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.825530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.825544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.859 [2024-05-12 04:58:31.825558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.825571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 
04:58:31.938084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.938350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.859 [2024-05-12 04:58:31.938483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.938624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.976281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.976494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.859 [2024-05-12 04:58:31.976620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.976676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.976795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.976850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.859 [2024-05-12 04:58:31.976957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.977007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.977143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.977191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.859 [2024-05-12 04:58:31.977263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.977376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.977572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.977706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.859 [2024-05-12 04:58:31.977739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.977753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.977864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.977884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:24.859 [2024-05-12 04:58:31.977899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.977911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.977976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.977992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.859 [2024-05-12 04:58:31.978007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.978018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.978092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:24.859 [2024-05-12 04:58:31.978110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.859 [2024-05-12 04:58:31.978124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:24.859 [2024-05-12 04:58:31.978136] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.859 [2024-05-12 04:58:31.978379] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.504 ms, result 0 00:18:25.118 true 00:18:25.118 04:58:32 -- ftl/trim.sh@63 -- # killprocess 73232 00:18:25.118 04:58:32 -- common/autotest_common.sh@926 -- # '[' -z 73232 ']' 00:18:25.118 04:58:32 -- common/autotest_common.sh@930 -- # kill -0 73232 00:18:25.118 04:58:32 -- common/autotest_common.sh@931 -- # uname 00:18:25.118 04:58:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:25.118 04:58:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73232 00:18:25.118 killing process with pid 73232 00:18:25.118 04:58:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:25.118 04:58:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:25.118 04:58:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73232' 00:18:25.118 04:58:32 -- common/autotest_common.sh@945 -- # kill 73232 00:18:25.118 04:58:32 -- common/autotest_common.sh@950 -- # wait 73232 00:18:29.306 04:58:36 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:30.684 65536+0 records in 00:18:30.685 65536+0 records out 00:18:30.685 268435456 bytes (268 MB, 256 MiB) copied, 1.07805 s, 249 MB/s 00:18:30.685 04:58:37 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:30.685 [2024-05-12 04:58:37.568441] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:30.685 [2024-05-12 04:58:37.568580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73428 ] 00:18:30.685 [2024-05-12 04:58:37.728458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.944 [2024-05-12 04:58:37.948423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.203 [2024-05-12 04:58:38.243521] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:31.204 [2024-05-12 04:58:38.243618] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:31.465 [2024-05-12 04:58:38.398750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.398824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:31.465 [2024-05-12 04:58:38.398859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:31.465 [2024-05-12 04:58:38.398874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.402064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.402108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.465 [2024-05-12 04:58:38.402141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.162 ms 00:18:31.465 [2024-05-12 04:58:38.402152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.402393] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:31.465 [2024-05-12 04:58:38.403348] 
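The spdk_dd run traced above consumes the 256 MiB random pattern produced by the dd call at trim.sh@66 and replays the bdev configuration assembled at trim.sh@54-56 via its --json argument. A minimal standalone sketch of that same flow, assuming a running SPDK target to query and placeholder file paths (the dd output path is elided in the trace above, so the names here are illustrative only):

  # Sketch only; paths are placeholders for the repo-relative ones in the trace.
  dd if=/dev/urandom of=random_pattern bs=4K count=65536    # 65536 x 4 KiB = 256 MiB
  {
    echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev            # dump the live bdev config (needs a running app)
    echo ']}'
  } > ftl.json
  build/bin/spdk_dd --if=random_pattern --ob=ftl0 --json=ftl.json

spdk_dd is its own SPDK application: it recreates the bdev stack from ftl.json (which is why a full 'FTL startup' management sequence follows) and then streams the input file into the ftl0 bdev.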
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:31.465 [2024-05-12 04:58:38.403433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.403467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.465 [2024-05-12 04:58:38.403480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:18:31.465 [2024-05-12 04:58:38.403491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.404958] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:31.465 [2024-05-12 04:58:38.419687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.419729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:31.465 [2024-05-12 04:58:38.419764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.730 ms 00:18:31.465 [2024-05-12 04:58:38.419774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.419925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.419948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:31.465 [2024-05-12 04:58:38.419965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:31.465 [2024-05-12 04:58:38.419976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.424352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.424388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.465 [2024-05-12 04:58:38.424419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.318 ms 00:18:31.465 [2024-05-12 04:58:38.424428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.424551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.424574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.465 [2024-05-12 04:58:38.424586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:31.465 [2024-05-12 04:58:38.424596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.424635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.465 [2024-05-12 04:58:38.424649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:31.465 [2024-05-12 04:58:38.424659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:31.465 [2024-05-12 04:58:38.424668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.465 [2024-05-12 04:58:38.424712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:31.465 [2024-05-12 04:58:38.428787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.466 [2024-05-12 04:58:38.428825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.466 [2024-05-12 04:58:38.428856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms 00:18:31.466 [2024-05-12 04:58:38.428866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.466 
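Each management step in this startup sequence emits a name/duration/status triple through trace_step (source lines 407/409/410), so per-step cost can be pulled straight from a captured console log. A throwaway sketch, assuming one trace entry per line as on the live console and a hypothetical capture file build.log:

  # Pair each 407:trace_step "name:" line with the following 409 "duration:" line,
  # then rank the slowest FTL management steps.
  awk '/407:trace_step/ { sub(/.*name: /, "");     name = $0 }
       /409:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); print $0, name }' \
      build.log | sort -rn | head

Against this run, that ranking would surface the NV cache and metadata initialization steps (tens of milliseconds each) well ahead of sub-millisecond steps like Decorate bands.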
[2024-05-12 04:58:38.428940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.466 [2024-05-12 04:58:38.428969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:31.466 [2024-05-12 04:58:38.428981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:31.466 [2024-05-12 04:58:38.428991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.466 [2024-05-12 04:58:38.429036] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:31.466 [2024-05-12 04:58:38.429070] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:31.466 [2024-05-12 04:58:38.429108] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:31.466 [2024-05-12 04:58:38.429125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:31.466 [2024-05-12 04:58:38.429204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:31.466 [2024-05-12 04:58:38.429268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:31.466 [2024-05-12 04:58:38.429285] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:31.466 [2024-05-12 04:58:38.429298] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:31.466 [2024-05-12 04:58:38.429311] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:31.466 [2024-05-12 04:58:38.429321] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:31.466 [2024-05-12 04:58:38.429331] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:31.466 [2024-05-12 04:58:38.429341] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:31.466 [2024-05-12 04:58:38.429350] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:31.466 [2024-05-12 04:58:38.429361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.466 [2024-05-12 04:58:38.429370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:31.466 [2024-05-12 04:58:38.429392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:18:31.466 [2024-05-12 04:58:38.429402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.466 [2024-05-12 04:58:38.429482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.466 [2024-05-12 04:58:38.429499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:31.466 [2024-05-12 04:58:38.429510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:31.466 [2024-05-12 04:58:38.429519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.466 [2024-05-12 04:58:38.429662] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:31.466 [2024-05-12 04:58:38.429679] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:31.466 [2024-05-12 04:58:38.429690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.466 [2024-05-12 04:58:38.429715] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:31.466 [2024-05-12 04:58:38.429738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:31.466 [2024-05-12 04:58:38.429758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:31.466 [2024-05-12 04:58:38.429768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.466 [2024-05-12 04:58:38.429787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:31.466 [2024-05-12 04:58:38.429797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:31.466 [2024-05-12 04:58:38.429807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.466 [2024-05-12 04:58:38.429816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:31.466 [2024-05-12 04:58:38.429827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:31.466 [2024-05-12 04:58:38.429837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:31.466 [2024-05-12 04:58:38.429856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:31.466 [2024-05-12 04:58:38.429865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:31.466 [2024-05-12 04:58:38.429903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:31.466 [2024-05-12 04:58:38.429913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:31.466 [2024-05-12 04:58:38.429922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:31.466 [2024-05-12 04:58:38.429932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.466 [2024-05-12 04:58:38.429950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:31.466 [2024-05-12 04:58:38.429959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.466 [2024-05-12 04:58:38.429979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:31.466 [2024-05-12 04:58:38.429988] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:31.466 [2024-05-12 04:58:38.429997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.466 [2024-05-12 04:58:38.430006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:31.466 [2024-05-12 04:58:38.430015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:31.466 [2024-05-12 04:58:38.430025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:31.466 [2024-05-12 04:58:38.430034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:31.466 [2024-05-12 04:58:38.430044] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:31.466 [2024-05-12 04:58:38.430052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.466 [2024-05-12 04:58:38.430062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:31.466 [2024-05-12 04:58:38.430071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:31.466 [2024-05-12 04:58:38.430080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.466 [2024-05-12 04:58:38.430089] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:31.466 [2024-05-12 04:58:38.430099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:31.466 [2024-05-12 04:58:38.430110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.466 [2024-05-12 04:58:38.430120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.466 [2024-05-12 04:58:38.430131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:31.466 [2024-05-12 04:58:38.430141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:31.466 [2024-05-12 04:58:38.430151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:31.466 [2024-05-12 04:58:38.430160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:31.466 [2024-05-12 04:58:38.430170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:31.466 [2024-05-12 04:58:38.430180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:31.466 [2024-05-12 04:58:38.430191] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:31.466 [2024-05-12 04:58:38.430234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.466 [2024-05-12 04:58:38.430250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:31.466 [2024-05-12 04:58:38.430261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:31.466 [2024-05-12 04:58:38.430272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:31.466 [2024-05-12 04:58:38.430282] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:31.466 [2024-05-12 04:58:38.430292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:31.466 [2024-05-12 04:58:38.430302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:31.466 [2024-05-12 04:58:38.430312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:31.466 [2024-05-12 04:58:38.430323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:31.466 [2024-05-12 04:58:38.430333] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:31.466 
[2024-05-12 04:58:38.430343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:31.466 [2024-05-12 04:58:38.430353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:31.466 [2024-05-12 04:58:38.430364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:31.466 [2024-05-12 04:58:38.430375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:31.466 [2024-05-12 04:58:38.430385] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:31.466 [2024-05-12 04:58:38.430397] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.466 [2024-05-12 04:58:38.430408] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:31.466 [2024-05-12 04:58:38.430418] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:31.466 [2024-05-12 04:58:38.430429] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:31.466 [2024-05-12 04:58:38.430439] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:31.466 [2024-05-12 04:58:38.430450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.466 [2024-05-12 04:58:38.430489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:31.467 [2024-05-12 04:58:38.430501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:18:31.467 [2024-05-12 04:58:38.430512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.447365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.447415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:31.467 [2024-05-12 04:58:38.447449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.759 ms 00:18:31.467 [2024-05-12 04:58:38.447460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.447607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.447626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:31.467 [2024-05-12 04:58:38.447638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:31.467 [2024-05-12 04:58:38.447648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.492765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.492825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.467 [2024-05-12 04:58:38.492876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.087 ms 00:18:31.467 [2024-05-12 04:58:38.492887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 
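The superblock metadata layout dump above expresses each region as hex block offsets and sizes; with the 4096-byte FTL block size reported by bdev_get_bdevs earlier, those fields convert directly to the MiB figures in the human-readable layout dump. For example, the l2p region (type 0x2) has blk_sz 0x5a00 = 23040 blocks = 90.00 MiB at blk_offs 0x20 = 0.12 MiB, matching "Region l2p ... blocks: 90.00 MiB" above. A tiny sketch of the conversion (to_mib is a hypothetical helper, not part of the test scripts):

  # Convert a hex block count or offset to MiB, assuming 4096-byte FTL blocks.
  to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
  to_mib 0x5a00   # l2p region size   -> 90.00
  to_mib 0x20     # l2p region offset -> .12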
[2024-05-12 04:58:38.493046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.493066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.467 [2024-05-12 04:58:38.493078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:31.467 [2024-05-12 04:58:38.493089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.493520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.493541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.467 [2024-05-12 04:58:38.493553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:18:31.467 [2024-05-12 04:58:38.493563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.493727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.493775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.467 [2024-05-12 04:58:38.493802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:18:31.467 [2024-05-12 04:58:38.493812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.509612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.509653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.467 [2024-05-12 04:58:38.509670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.772 ms 00:18:31.467 [2024-05-12 04:58:38.509681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.525893] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:31.467 [2024-05-12 04:58:38.525938] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:31.467 [2024-05-12 04:58:38.525960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.525972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:31.467 [2024-05-12 04:58:38.525984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.111 ms 00:18:31.467 [2024-05-12 04:58:38.525995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.556090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.556132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:31.467 [2024-05-12 04:58:38.556165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.983 ms 00:18:31.467 [2024-05-12 04:58:38.556176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.570424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.570464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:31.467 [2024-05-12 04:58:38.570480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.100 ms 00:18:31.467 [2024-05-12 04:58:38.570490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.584810] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.584880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:31.467 [2024-05-12 04:58:38.584943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.239 ms 00:18:31.467 [2024-05-12 04:58:38.584954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.467 [2024-05-12 04:58:38.585564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.467 [2024-05-12 04:58:38.585588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:31.467 [2024-05-12 04:58:38.585601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:18:31.467 [2024-05-12 04:58:38.585612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.663813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.663933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:31.738 [2024-05-12 04:58:38.663972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.165 ms 00:18:31.738 [2024-05-12 04:58:38.663985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.676542] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:31.738 [2024-05-12 04:58:38.690584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.690651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:31.738 [2024-05-12 04:58:38.690687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.445 ms 00:18:31.738 [2024-05-12 04:58:38.690697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.690820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.690854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:31.738 [2024-05-12 04:58:38.690883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:31.738 [2024-05-12 04:58:38.690894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.690956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.690977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:31.738 [2024-05-12 04:58:38.690988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:31.738 [2024-05-12 04:58:38.691011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.693263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.693341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:31.738 [2024-05-12 04:58:38.693388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:18:31.738 [2024-05-12 04:58:38.693399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.693450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.693483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:31.738 [2024-05-12 04:58:38.693495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.007 ms 00:18:31.738 [2024-05-12 04:58:38.693506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.693558] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:31.738 [2024-05-12 04:58:38.693577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.693590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:31.738 [2024-05-12 04:58:38.693603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:31.738 [2024-05-12 04:58:38.693615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.723932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.723983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:31.738 [2024-05-12 04:58:38.724002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.272 ms 00:18:31.738 [2024-05-12 04:58:38.724025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.724158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.738 [2024-05-12 04:58:38.724180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:31.738 [2024-05-12 04:58:38.724194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:31.738 [2024-05-12 04:58:38.724205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.738 [2024-05-12 04:58:38.725385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:31.738 [2024-05-12 04:58:38.729553] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.222 ms, result 0 00:18:31.738 [2024-05-12 04:58:38.730517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:31.738 [2024-05-12 04:58:38.746454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.980  Copying: 23/256 [MB] (23 MBps) Copying: 49/256 [MB] (25 MBps) Copying: 74/256 [MB] (25 MBps) Copying: 99/256 [MB] (25 MBps) Copying: 124/256 [MB] (25 MBps) Copying: 150/256 [MB] (25 MBps) Copying: 176/256 [MB] (25 MBps) Copying: 201/256 [MB] (25 MBps) Copying: 227/256 [MB] (26 MBps) Copying: 253/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-05-12 04:58:48.847732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.980 [2024-05-12 04:58:48.860041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.860206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:41.980 [2024-05-12 04:58:48.860364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.980 [2024-05-12 04:58:48.860418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.860597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:41.980 [2024-05-12 04:58:48.863912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.864061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Unregister IO device 00:18:41.980 [2024-05-12 04:58:48.864179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:18:41.980 [2024-05-12 04:58:48.864340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.865955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.866116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:41.980 [2024-05-12 04:58:48.866254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:18:41.980 [2024-05-12 04:58:48.866406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.873409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.873557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:41.980 [2024-05-12 04:58:48.873677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.931 ms 00:18:41.980 [2024-05-12 04:58:48.873727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.881434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.881589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:41.980 [2024-05-12 04:58:48.881699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.503 ms 00:18:41.980 [2024-05-12 04:58:48.881749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.912542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.912695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:41.980 [2024-05-12 04:58:48.912827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.700 ms 00:18:41.980 [2024-05-12 04:58:48.912851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.930684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.930727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:41.980 [2024-05-12 04:58:48.930745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.747 ms 00:18:41.980 [2024-05-12 04:58:48.930763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.930940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.930961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:41.980 [2024-05-12 04:58:48.930975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:41.980 [2024-05-12 04:58:48.930986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.962384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:48.962426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:41.980 [2024-05-12 04:58:48.962443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.374 ms 00:18:41.980 [2024-05-12 04:58:48.962469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:48.993298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 
04:58:48.993342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:41.980 [2024-05-12 04:58:48.993360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.748 ms 00:18:41.980 [2024-05-12 04:58:48.993371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:49.023915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:49.023957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:41.980 [2024-05-12 04:58:49.023974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.464 ms 00:18:41.980 [2024-05-12 04:58:49.023986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:49.054575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.980 [2024-05-12 04:58:49.054617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:41.980 [2024-05-12 04:58:49.054635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.478 ms 00:18:41.980 [2024-05-12 04:58:49.054646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.980 [2024-05-12 04:58:49.054725] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:41.980 [2024-05-12 04:58:49.054757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:41.980 [2024-05-12 04:58:49.054853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054933] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.054990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 
04:58:49.055251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:18:41.981 [2024-05-12 04:58:49.055545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:41.981 [2024-05-12 04:58:49.055849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:41.982 [2024-05-12 04:58:49.055961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:41.982 [2024-05-12 04:58:49.055972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c7a363c9-0ae1-4712-9c4d-0d43e1b35de1 00:18:41.982 [2024-05-12 04:58:49.055998] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:41.982 [2024-05-12 04:58:49.056009] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:41.982 [2024-05-12 04:58:49.056019] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:41.982 [2024-05-12 04:58:49.056030] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:41.982 [2024-05-12 04:58:49.056040] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:41.982 [2024-05-12 04:58:49.056052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:41.982 [2024-05-12 04:58:49.056063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:41.982 [2024-05-12 04:58:49.056072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:41.982 [2024-05-12 04:58:49.056082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:41.982 [2024-05-12 04:58:49.056093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.982 [2024-05-12 04:58:49.056104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:41.982 [2024-05-12 04:58:49.056121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:18:41.982 [2024-05-12 04:58:49.056132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.982 [2024-05-12 04:58:49.072503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.982 [2024-05-12 04:58:49.072541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:41.982 [2024-05-12 04:58:49.072557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.344 ms 00:18:41.982 [2024-05-12 04:58:49.072569] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.982 [2024-05-12 04:58:49.072837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.982 [2024-05-12 04:58:49.072862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:41.982 [2024-05-12 04:58:49.072875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:18:41.982 [2024-05-12 04:58:49.072885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.124861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.124923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.241 [2024-05-12 04:58:49.124943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.124955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.125082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.125107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.241 [2024-05-12 04:58:49.125120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.125132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.125200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.125240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.241 [2024-05-12 04:58:49.125255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.125267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.125294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.125308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.241 [2024-05-12 04:58:49.125325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.125336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.222531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.222601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.241 [2024-05-12 04:58:49.222620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.222632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.261359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.261411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.241 [2024-05-12 04:58:49.261430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.261442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.261527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.261545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.241 [2024-05-12 04:58:49.261557] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.261568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.261604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.261617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.241 [2024-05-12 04:58:49.261628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.261645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.261764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.261783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.241 [2024-05-12 04:58:49.261795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.261806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.261857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.261873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:42.241 [2024-05-12 04:58:49.261885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.261896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.261948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.261963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.241 [2024-05-12 04:58:49.261974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.261985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.262040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.241 [2024-05-12 04:58:49.262058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.241 [2024-05-12 04:58:49.262070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.241 [2024-05-12 04:58:49.262086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.241 [2024-05-12 04:58:49.262281] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 402.224 ms, result 0 00:18:43.618 00:18:43.618 00:18:43.618 04:58:50 -- ftl/trim.sh@72 -- # svcpid=73564 00:18:43.618 04:58:50 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:43.618 04:58:50 -- ftl/trim.sh@73 -- # waitforlisten 73564 00:18:43.618 04:58:50 -- common/autotest_common.sh@819 -- # '[' -z 73564 ']' 00:18:43.618 04:58:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.618 04:58:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.618 04:58:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
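The xtrace lines above capture the trim test's standard launch-and-wait step: start spdk_tgt in the background with the ftl_init log flag, record its pid, and block until the target's RPC server is listening on /var/tmp/spdk.sock. A minimal sketch of that pattern, with paths taken from the log and waitforlisten being the autotest_common.sh helper the trace names (an illustration of the recorded commands, not the script's exact source):

  # start the SPDK target in the background with FTL init tracing enabled
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!                 # the pid the trace shows as 73564
  waitforlisten "$svcpid"   # poll /var/tmp/spdk.sock until the RPC server answers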
00:18:43.618 04:58:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.618 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:18:43.618 [2024-05-12 04:58:50.519267] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:43.618 [2024-05-12 04:58:50.519690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73564 ] 00:18:43.618 [2024-05-12 04:58:50.688833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.877 [2024-05-12 04:58:50.868761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:43.877 [2024-05-12 04:58:50.869006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.253 04:58:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:45.253 04:58:52 -- common/autotest_common.sh@852 -- # return 0 00:18:45.253 04:58:52 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:45.253 [2024-05-12 04:58:52.346149] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:45.253 [2024-05-12 04:58:52.346246] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:45.512 [2024-05-12 04:58:52.517332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.517435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:45.512 [2024-05-12 04:58:52.517501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:45.512 [2024-05-12 04:58:52.517528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.521936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.521980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.512 [2024-05-12 04:58:52.522028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:18:45.512 [2024-05-12 04:58:52.522042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.522187] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:45.512 [2024-05-12 04:58:52.523192] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:45.512 [2024-05-12 04:58:52.523307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.523323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.512 [2024-05-12 04:58:52.523338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:18:45.512 [2024-05-12 04:58:52.523350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.524638] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:45.512 [2024-05-12 04:58:52.540342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.540408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:45.512 [2024-05-12 04:58:52.540432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.712 ms 00:18:45.512 [2024-05-12 04:58:52.540449] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.540566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.540607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:45.512 [2024-05-12 04:58:52.540621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:45.512 [2024-05-12 04:58:52.540637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.545130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.545206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.512 [2024-05-12 04:58:52.545269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.427 ms 00:18:45.512 [2024-05-12 04:58:52.545292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.545465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.545494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.512 [2024-05-12 04:58:52.545509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:45.512 [2024-05-12 04:58:52.545526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.545564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.545586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:45.512 [2024-05-12 04:58:52.545606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:45.512 [2024-05-12 04:58:52.545623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.545668] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:45.512 [2024-05-12 04:58:52.549777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.549812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.512 [2024-05-12 04:58:52.549865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.121 ms 00:18:45.512 [2024-05-12 04:58:52.549878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.549959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.549977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:45.512 [2024-05-12 04:58:52.549995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:45.512 [2024-05-12 04:58:52.550007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.550041] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:45.512 [2024-05-12 04:58:52.550073] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:45.512 [2024-05-12 04:58:52.550124] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:45.512 [2024-05-12 04:58:52.550146] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:45.512 [2024-05-12 04:58:52.550292] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:45.512 [2024-05-12 04:58:52.550313] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:45.512 [2024-05-12 04:58:52.550334] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:45.512 [2024-05-12 04:58:52.550350] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:45.512 [2024-05-12 04:58:52.550369] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:45.512 [2024-05-12 04:58:52.550388] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:45.512 [2024-05-12 04:58:52.550404] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:45.512 [2024-05-12 04:58:52.550416] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:45.512 [2024-05-12 04:58:52.550435] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:45.512 [2024-05-12 04:58:52.550449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.550465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:45.512 [2024-05-12 04:58:52.550478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:18:45.512 [2024-05-12 04:58:52.550494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.550610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.512 [2024-05-12 04:58:52.550632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:45.512 [2024-05-12 04:58:52.550646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:45.512 [2024-05-12 04:58:52.550670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.512 [2024-05-12 04:58:52.550760] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:45.512 [2024-05-12 04:58:52.550790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:45.512 [2024-05-12 04:58:52.550805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.512 [2024-05-12 04:58:52.550823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.512 [2024-05-12 04:58:52.550836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:45.512 [2024-05-12 04:58:52.550854] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:45.512 [2024-05-12 04:58:52.550866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:45.512 [2024-05-12 04:58:52.550886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:45.512 [2024-05-12 04:58:52.550899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:45.512 [2024-05-12 04:58:52.550915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.512 [2024-05-12 04:58:52.550927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:45.512 [2024-05-12 04:58:52.550943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:45.512 [2024-05-12 04:58:52.550954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.512 
[2024-05-12 04:58:52.550971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:45.512 [2024-05-12 04:58:52.550983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:45.512 [2024-05-12 04:58:52.550999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:45.512 [2024-05-12 04:58:52.551029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:45.512 [2024-05-12 04:58:52.551041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:45.512 [2024-05-12 04:58:52.551069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:45.512 [2024-05-12 04:58:52.551085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:45.512 [2024-05-12 04:58:52.551097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:45.512 [2024-05-12 04:58:52.551117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:45.512 [2024-05-12 04:58:52.551146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:45.512 [2024-05-12 04:58:52.551158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:45.512 [2024-05-12 04:58:52.551186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:45.512 [2024-05-12 04:58:52.551227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:45.512 [2024-05-12 04:58:52.551288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:45.512 [2024-05-12 04:58:52.551300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:45.512 [2024-05-12 04:58:52.551326] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:45.512 [2024-05-12 04:58:52.551341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.512 [2024-05-12 04:58:52.551368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:45.512 [2024-05-12 04:58:52.551380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:45.512 [2024-05-12 04:58:52.551399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.512 [2024-05-12 04:58:52.551410] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:45.512 [2024-05-12 04:58:52.551427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:45.512 [2024-05-12 04:58:52.551438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.512 [2024-05-12 04:58:52.551454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.512 [2024-05-12 04:58:52.551471] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:18:45.512 [2024-05-12 04:58:52.551488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:45.512 [2024-05-12 04:58:52.551499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:45.513 [2024-05-12 04:58:52.551515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:45.513 [2024-05-12 04:58:52.551526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:45.513 [2024-05-12 04:58:52.551541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:45.513 [2024-05-12 04:58:52.551554] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:45.513 [2024-05-12 04:58:52.551573] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.513 [2024-05-12 04:58:52.551588] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:45.513 [2024-05-12 04:58:52.551604] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:45.513 [2024-05-12 04:58:52.551617] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:45.513 [2024-05-12 04:58:52.551639] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:45.513 [2024-05-12 04:58:52.551651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:45.513 [2024-05-12 04:58:52.551667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:45.513 [2024-05-12 04:58:52.551679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:45.513 [2024-05-12 04:58:52.551695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:45.513 [2024-05-12 04:58:52.551708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:45.513 [2024-05-12 04:58:52.551724] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:45.513 [2024-05-12 04:58:52.551737] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:45.513 [2024-05-12 04:58:52.551753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:45.513 [2024-05-12 04:58:52.551766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:45.513 [2024-05-12 04:58:52.551781] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:45.513 [2024-05-12 04:58:52.551796] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.513 [2024-05-12 04:58:52.551813] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:45.513 [2024-05-12 04:58:52.551826] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:45.513 [2024-05-12 04:58:52.551842] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:45.513 [2024-05-12 04:58:52.551871] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:45.513 [2024-05-12 04:58:52.551919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.551934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:45.513 [2024-05-12 04:58:52.551951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:18:45.513 [2024-05-12 04:58:52.551964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.513 [2024-05-12 04:58:52.571189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.571432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.513 [2024-05-12 04:58:52.571553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.154 ms 00:18:45.513 [2024-05-12 04:58:52.571658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.513 [2024-05-12 04:58:52.571869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.572007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:45.513 [2024-05-12 04:58:52.572154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:45.513 [2024-05-12 04:58:52.572209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.513 [2024-05-12 04:58:52.611402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.611637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.513 [2024-05-12 04:58:52.611799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.089 ms 00:18:45.513 [2024-05-12 04:58:52.611870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.513 [2024-05-12 04:58:52.612098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.612155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.513 [2024-05-12 04:58:52.612326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:45.513 [2024-05-12 04:58:52.612350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.513 [2024-05-12 04:58:52.612695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.612723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.513 [2024-05-12 04:58:52.612759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:45.513 [2024-05-12 04:58:52.612772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.513 [2024-05-12 04:58:52.612938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.612957] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.513 [2024-05-12 04:58:52.612975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:18:45.513 [2024-05-12 04:58:52.612988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.513 [2024-05-12 04:58:52.631469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.513 [2024-05-12 04:58:52.631669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.513 [2024-05-12 04:58:52.631808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.447 ms 00:18:45.513 [2024-05-12 04:58:52.631863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.648648] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:45.772 [2024-05-12 04:58:52.648853] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:45.772 [2024-05-12 04:58:52.648995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.649110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:45.772 [2024-05-12 04:58:52.649173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.825 ms 00:18:45.772 [2024-05-12 04:58:52.649308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.677351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.677391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:45.772 [2024-05-12 04:58:52.677438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.930 ms 00:18:45.772 [2024-05-12 04:58:52.677451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.692497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.692533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:45.772 [2024-05-12 04:58:52.692571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.954 ms 00:18:45.772 [2024-05-12 04:58:52.692583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.707290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.707326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:45.772 [2024-05-12 04:58:52.707368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.617 ms 00:18:45.772 [2024-05-12 04:58:52.707380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.707835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.707862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:45.772 [2024-05-12 04:58:52.707878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:18:45.772 [2024-05-12 04:58:52.707916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.790360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.790420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:18:45.772 [2024-05-12 04:58:52.790448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.383 ms 00:18:45.772 [2024-05-12 04:58:52.790462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.803574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:45.772 [2024-05-12 04:58:52.817468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.817563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:45.772 [2024-05-12 04:58:52.817585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.859 ms 00:18:45.772 [2024-05-12 04:58:52.817603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.817735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.817765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:45.772 [2024-05-12 04:58:52.817779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:45.772 [2024-05-12 04:58:52.817795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.817854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.817884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:45.772 [2024-05-12 04:58:52.817897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:45.772 [2024-05-12 04:58:52.817913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.821154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.821250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:45.772 [2024-05-12 04:58:52.821294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:18:45.772 [2024-05-12 04:58:52.821326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.821387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.821424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:45.772 [2024-05-12 04:58:52.821447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:45.772 [2024-05-12 04:58:52.821472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.821529] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:45.772 [2024-05-12 04:58:52.821561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.821574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:45.772 [2024-05-12 04:58:52.821591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:45.772 [2024-05-12 04:58:52.821604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.852451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.852499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:45.772 [2024-05-12 04:58:52.852540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.803 ms 
00:18:45.772 [2024-05-12 04:58:52.852554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.852701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.772 [2024-05-12 04:58:52.852721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:45.772 [2024-05-12 04:58:52.852740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:45.772 [2024-05-12 04:58:52.852753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.772 [2024-05-12 04:58:52.853873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:45.772 [2024-05-12 04:58:52.857991] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.191 ms, result 0 00:18:45.772 [2024-05-12 04:58:52.859057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:45.772 Some configs were skipped because the RPC state that can call them passed over. 00:18:46.031 04:58:52 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:46.289 [2024-05-12 04:58:53.178636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.289 [2024-05-12 04:58:53.178854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:46.289 [2024-05-12 04:58:53.179012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.431 ms 00:18:46.289 [2024-05-12 04:58:53.179080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.289 [2024-05-12 04:58:53.179272] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.059 ms, result 0 00:18:46.289 true 00:18:46.289 04:58:53 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:46.548 [2024-05-12 04:58:53.472343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.548 [2024-05-12 04:58:53.472393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:46.548 [2024-05-12 04:58:53.472436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.198 ms 00:18:46.548 [2024-05-12 04:58:53.472450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.548 [2024-05-12 04:58:53.472509] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 29.366 ms, result 0 00:18:46.548 true 00:18:46.548 04:58:53 -- ftl/trim.sh@81 -- # killprocess 73564 00:18:46.548 04:58:53 -- common/autotest_common.sh@926 -- # '[' -z 73564 ']' 00:18:46.548 04:58:53 -- common/autotest_common.sh@930 -- # kill -0 73564 00:18:46.548 04:58:53 -- common/autotest_common.sh@931 -- # uname 00:18:46.548 04:58:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:46.548 04:58:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73564 00:18:46.548 killing process with pid 73564 00:18:46.548 04:58:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:46.548 04:58:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:46.548 04:58:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73564' 00:18:46.548 04:58:53 -- common/autotest_common.sh@945 -- # kill 73564 00:18:46.548 04:58:53 -- 
common/autotest_common.sh@950 -- # wait 73564 00:18:47.486 [2024-05-12 04:58:54.425692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.486 [2024-05-12 04:58:54.425775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:47.486 [2024-05-12 04:58:54.425797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.486 [2024-05-12 04:58:54.425811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.486 [2024-05-12 04:58:54.425842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:47.486 [2024-05-12 04:58:54.429212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.486 [2024-05-12 04:58:54.429255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:47.486 [2024-05-12 04:58:54.429277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:18:47.486 [2024-05-12 04:58:54.429292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.486 [2024-05-12 04:58:54.429645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.486 [2024-05-12 04:58:54.429663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:47.486 [2024-05-12 04:58:54.429678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:18:47.486 [2024-05-12 04:58:54.429690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.486 [2024-05-12 04:58:54.433883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.486 [2024-05-12 04:58:54.433926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:47.486 [2024-05-12 04:58:54.433945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.164 ms 00:18:47.486 [2024-05-12 04:58:54.433958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.441404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.441436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:47.487 [2024-05-12 04:58:54.441470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.392 ms 00:18:47.487 [2024-05-12 04:58:54.441482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.453969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.454006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.487 [2024-05-12 04:58:54.454055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 00:18:47.487 [2024-05-12 04:58:54.454067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.462927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.462967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.487 [2024-05-12 04:58:54.462987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.810 ms 00:18:47.487 [2024-05-12 04:58:54.463002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.463158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.463178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:18:47.487 [2024-05-12 04:58:54.463192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:47.487 [2024-05-12 04:58:54.463218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.476112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.476150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:47.487 [2024-05-12 04:58:54.476185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.812 ms 00:18:47.487 [2024-05-12 04:58:54.476196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.492416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.492473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:47.487 [2024-05-12 04:58:54.492526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.137 ms 00:18:47.487 [2024-05-12 04:58:54.492546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.504745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.504782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.487 [2024-05-12 04:58:54.504816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.105 ms 00:18:47.487 [2024-05-12 04:58:54.504827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.517729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.487 [2024-05-12 04:58:54.517765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.487 [2024-05-12 04:58:54.517799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.806 ms 00:18:47.487 [2024-05-12 04:58:54.517809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.487 [2024-05-12 04:58:54.517867] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.487 [2024-05-12 04:58:54.517890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.517930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.517944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.517960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.517973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.517994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518052] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518554] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.487 [2024-05-12 04:58:54.518905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.518922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.518935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 
04:58:54.518952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.518965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.518982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.518995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:18:47.488 [2024-05-12 04:58:54.519365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.488 [2024-05-12 04:58:54.519613] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.488 [2024-05-12 04:58:54.519651] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c7a363c9-0ae1-4712-9c4d-0d43e1b35de1 00:18:47.488 [2024-05-12 04:58:54.519665] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.488 [2024-05-12 04:58:54.519687] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.488 [2024-05-12 04:58:54.519699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.488 [2024-05-12 04:58:54.519714] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.488 [2024-05-12 04:58:54.519726] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.488 [2024-05-12 04:58:54.519742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.488 [2024-05-12 04:58:54.519754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.488 [2024-05-12 04:58:54.519769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.488 [2024-05-12 04:58:54.519780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
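Each shutdown in this run ends with an ftl_debug dump like the one above: roughly a hundred per-band validity lines followed by device statistics and limits. When scanning long nightly logs it helps to condense those dumps mechanically. Below is a minimal sketch that does so, assuming only the line format visible above ("Band N: VALID / TOTAL wr_cnt: W state: S"); the script and its name are illustrative and not part of the test suite.

```python
#!/usr/bin/env python3
"""Condense ftl_dev_dump_bands output (format as seen in the log above).

A sketch only: reads log text on stdin and prints a two-line summary
instead of one line per band.
"""
import re
import sys
from collections import Counter

# Matches e.g. "Band 1: 0 / 261120 wr_cnt: 0 state: free"
BAND_RE = re.compile(
    r"Band\s+(\d+):\s+(\d+)\s+/\s+(\d+)\s+wr_cnt:\s+(\d+)\s+state:\s+(\w+)"
)

def summarize(lines):
    states = Counter()
    valid = total = writes = 0
    for line in lines:
        m = BAND_RE.search(line)
        if not m:
            continue
        _, v, t, w, state = m.groups()
        valid += int(v)
        total += int(t)
        writes += int(w)
        states[state] += 1
    return states, valid, total, writes

if __name__ == "__main__":
    states, valid, total, writes = summarize(sys.stdin)
    print(f"bands: {sum(states.values())} states: {dict(states)}")
    print(f"valid blocks: {valid} / {total}, total wr_cnt: {writes}")
```

Fed this log on stdin (for example `grep ftl_dev_dump_bands autorun.log | python3 summarize_bands.py`, where summarize_bands.py is a hypothetical name for the script above), it reduces the hundred identical "Band N ... state: free" lines to two summary lines.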
00:18:47.488 [2024-05-12 04:58:54.519797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.488 [2024-05-12 04:58:54.519810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.488 [2024-05-12 04:58:54.519827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:18:47.488 [2024-05-12 04:58:54.519839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-05-12 04:58:54.536073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.488 [2024-05-12 04:58:54.536113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.488 [2024-05-12 04:58:54.536142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.180 ms 00:18:47.488 [2024-05-12 04:58:54.536155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-05-12 04:58:54.536485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.488 [2024-05-12 04:58:54.536505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.488 [2024-05-12 04:58:54.536524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:18:47.488 [2024-05-12 04:58:54.536537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-05-12 04:58:54.594475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.488 [2024-05-12 04:58:54.594525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.488 [2024-05-12 04:58:54.594564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.488 [2024-05-12 04:58:54.594577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-05-12 04:58:54.594699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.488 [2024-05-12 04:58:54.594716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.488 [2024-05-12 04:58:54.594733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.488 [2024-05-12 04:58:54.594744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-05-12 04:58:54.594815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.488 [2024-05-12 04:58:54.594832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.488 [2024-05-12 04:58:54.594869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.488 [2024-05-12 04:58:54.594882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.488 [2024-05-12 04:58:54.594912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.488 [2024-05-12 04:58:54.594926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.488 [2024-05-12 04:58:54.594942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.488 [2024-05-12 04:58:54.594955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.692447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.692509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.747 [2024-05-12 04:58:54.692546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.692557] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.729377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.729414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.747 [2024-05-12 04:58:54.729449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.729460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.729549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.729568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.747 [2024-05-12 04:58:54.729584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.729595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.729631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.729644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.747 [2024-05-12 04:58:54.729657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.729667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.729789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.729822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.747 [2024-05-12 04:58:54.729839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.729866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.729922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.729938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.747 [2024-05-12 04:58:54.729952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.729963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.730008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.730022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.747 [2024-05-12 04:58:54.730040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.730051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.730107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.747 [2024-05-12 04:58:54.730138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.747 [2024-05-12 04:58:54.730152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.747 [2024-05-12 04:58:54.730163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.747 [2024-05-12 04:58:54.730380] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 304.623 ms, result 0 00:18:48.726 04:58:55 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:48.726 04:58:55 -- ftl/trim.sh@85 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:48.985 [2024-05-12 04:58:55.861055] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:48.985 [2024-05-12 04:58:55.861248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73635 ] 00:18:48.985 [2024-05-12 04:58:56.026296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.245 [2024-05-12 04:58:56.175192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.505 [2024-05-12 04:58:56.452833] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:49.505 [2024-05-12 04:58:56.452989] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:49.505 [2024-05-12 04:58:56.619975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-05-12 04:58:56.620028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:49.505 [2024-05-12 04:58:56.620064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:49.505 [2024-05-12 04:58:56.620080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-05-12 04:58:56.623189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-05-12 04:58:56.623261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.505 [2024-05-12 04:58:56.623295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:18:49.505 [2024-05-12 04:58:56.623306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-05-12 04:58:56.623458] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:49.505 [2024-05-12 04:58:56.624445] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:49.505 [2024-05-12 04:58:56.624485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-05-12 04:58:56.624534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.505 [2024-05-12 04:58:56.624546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:18:49.505 [2024-05-12 04:58:56.624557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-05-12 04:58:56.625791] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:49.766 [2024-05-12 04:58:56.641847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.641887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:49.766 [2024-05-12 04:58:56.641919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.072 ms 00:18:49.766 [2024-05-12 04:58:56.641930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.642031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.642050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:49.766 [2024-05-12 04:58:56.642065] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:49.766 [2024-05-12 04:58:56.642075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.646474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.646512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.766 [2024-05-12 04:58:56.646543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.347 ms 00:18:49.766 [2024-05-12 04:58:56.646554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.646693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.646715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.766 [2024-05-12 04:58:56.646727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:49.766 [2024-05-12 04:58:56.646737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.646773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.646787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:49.766 [2024-05-12 04:58:56.646798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:49.766 [2024-05-12 04:58:56.646808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.646839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:49.766 [2024-05-12 04:58:56.650933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.650967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.766 [2024-05-12 04:58:56.650998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.105 ms 00:18:49.766 [2024-05-12 04:58:56.651009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.651070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.651090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:49.766 [2024-05-12 04:58:56.651101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:49.766 [2024-05-12 04:58:56.651112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.651135] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:49.766 [2024-05-12 04:58:56.651158] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:49.766 [2024-05-12 04:58:56.651193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:49.766 [2024-05-12 04:58:56.651212] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:49.766 [2024-05-12 04:58:56.651332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:49.766 [2024-05-12 04:58:56.651351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:49.766 [2024-05-12 04:58:56.651365] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:49.766 [2024-05-12 04:58:56.651379] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:49.766 [2024-05-12 04:58:56.651392] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:49.766 [2024-05-12 04:58:56.651404] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:49.766 [2024-05-12 04:58:56.651414] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:49.766 [2024-05-12 04:58:56.651423] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:49.766 [2024-05-12 04:58:56.651434] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:49.766 [2024-05-12 04:58:56.651444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.651454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:49.766 [2024-05-12 04:58:56.651470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:18:49.766 [2024-05-12 04:58:56.651481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.651555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.766 [2024-05-12 04:58:56.651570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:49.766 [2024-05-12 04:58:56.651581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:49.766 [2024-05-12 04:58:56.651591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.766 [2024-05-12 04:58:56.651683] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:49.766 [2024-05-12 04:58:56.651698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:49.766 [2024-05-12 04:58:56.651709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.766 [2024-05-12 04:58:56.651723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.766 [2024-05-12 04:58:56.651734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:49.766 [2024-05-12 04:58:56.651744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:49.766 [2024-05-12 04:58:56.651753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:49.766 [2024-05-12 04:58:56.651764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:49.766 [2024-05-12 04:58:56.651773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:49.766 [2024-05-12 04:58:56.651782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.766 [2024-05-12 04:58:56.651791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:49.766 [2024-05-12 04:58:56.651801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:49.766 [2024-05-12 04:58:56.651810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.766 [2024-05-12 04:58:56.651820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:49.766 [2024-05-12 04:58:56.651829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:49.766 [2024-05-12 04:58:56.651840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.766 
[2024-05-12 04:58:56.651849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:49.766 [2024-05-12 04:58:56.651859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:49.766 [2024-05-12 04:58:56.651868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.766 [2024-05-12 04:58:56.651914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:49.766 [2024-05-12 04:58:56.651943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:49.766 [2024-05-12 04:58:56.651954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:49.766 [2024-05-12 04:58:56.651965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:49.766 [2024-05-12 04:58:56.651975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:49.766 [2024-05-12 04:58:56.651986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:49.766 [2024-05-12 04:58:56.651996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:49.766 [2024-05-12 04:58:56.652007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:49.766 [2024-05-12 04:58:56.652017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:49.766 [2024-05-12 04:58:56.652027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:49.766 [2024-05-12 04:58:56.652038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:49.766 [2024-05-12 04:58:56.652048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:49.766 [2024-05-12 04:58:56.652059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:49.766 [2024-05-12 04:58:56.652069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:49.766 [2024-05-12 04:58:56.652079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:49.766 [2024-05-12 04:58:56.652090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:49.766 [2024-05-12 04:58:56.652100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:49.766 [2024-05-12 04:58:56.652110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.766 [2024-05-12 04:58:56.652121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:49.767 [2024-05-12 04:58:56.652131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:49.767 [2024-05-12 04:58:56.652142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.767 [2024-05-12 04:58:56.652152] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:49.767 [2024-05-12 04:58:56.652163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:49.767 [2024-05-12 04:58:56.652174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.767 [2024-05-12 04:58:56.652186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.767 [2024-05-12 04:58:56.652198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:49.767 [2024-05-12 04:58:56.652209] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:49.767 [2024-05-12 04:58:56.652230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:49.767 [2024-05-12 04:58:56.652241] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:18:49.767 [2024-05-12 04:58:56.652302] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:49.767 [2024-05-12 04:58:56.652314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:49.767 [2024-05-12 04:58:56.652325] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:49.767 [2024-05-12 04:58:56.652343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.767 [2024-05-12 04:58:56.652355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:49.767 [2024-05-12 04:58:56.652382] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:49.767 [2024-05-12 04:58:56.652393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:49.767 [2024-05-12 04:58:56.652403] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:49.767 [2024-05-12 04:58:56.652414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:49.767 [2024-05-12 04:58:56.652425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:49.767 [2024-05-12 04:58:56.652435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:49.767 [2024-05-12 04:58:56.652446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:49.767 [2024-05-12 04:58:56.652465] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:49.767 [2024-05-12 04:58:56.652476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:49.767 [2024-05-12 04:58:56.652486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:49.767 [2024-05-12 04:58:56.652497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:49.767 [2024-05-12 04:58:56.652508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:49.767 [2024-05-12 04:58:56.652518] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:49.767 [2024-05-12 04:58:56.652530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.767 [2024-05-12 04:58:56.652542] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:49.767 [2024-05-12 04:58:56.652552] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:49.767 [2024-05-12 
04:58:56.652563] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:49.767 [2024-05-12 04:58:56.652574] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:49.767 [2024-05-12 04:58:56.652586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.652602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:49.767 [2024-05-12 04:58:56.652613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:18:49.767 [2024-05-12 04:58:56.652623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.669586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.669629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.767 [2024-05-12 04:58:56.669662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.854 ms 00:18:49.767 [2024-05-12 04:58:56.669673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.669810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.669827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:49.767 [2024-05-12 04:58:56.669839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:49.767 [2024-05-12 04:58:56.669866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.724419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.724470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.767 [2024-05-12 04:58:56.724504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.524 ms 00:18:49.767 [2024-05-12 04:58:56.724516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.724646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.724663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.767 [2024-05-12 04:58:56.724676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:49.767 [2024-05-12 04:58:56.724686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.725011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.725027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.767 [2024-05-12 04:58:56.725039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:18:49.767 [2024-05-12 04:58:56.725050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.725190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.725223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.767 [2024-05-12 04:58:56.725234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:49.767 [2024-05-12 04:58:56.725244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.741485] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.741525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.767 [2024-05-12 04:58:56.741556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.215 ms 00:18:49.767 [2024-05-12 04:58:56.741567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.756753] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:49.767 [2024-05-12 04:58:56.756794] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:49.767 [2024-05-12 04:58:56.756827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.756838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:49.767 [2024-05-12 04:58:56.756866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.098 ms 00:18:49.767 [2024-05-12 04:58:56.756877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.784481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.784534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:49.767 [2024-05-12 04:58:56.784568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.519 ms 00:18:49.767 [2024-05-12 04:58:56.784586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.799287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.799323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:49.767 [2024-05-12 04:58:56.799354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.618 ms 00:18:49.767 [2024-05-12 04:58:56.799364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.814035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.814083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:49.767 [2024-05-12 04:58:56.814115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.592 ms 00:18:49.767 [2024-05-12 04:58:56.814125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.814653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.814690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:49.767 [2024-05-12 04:58:56.814706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:18:49.767 [2024-05-12 04:58:56.814719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.767 [2024-05-12 04:58:56.890421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.767 [2024-05-12 04:58:56.890491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:49.767 [2024-05-12 04:58:56.890526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.670 ms 00:18:49.767 [2024-05-12 04:58:56.890538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.903042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 59 (of 60) MiB 00:18:50.026 [2024-05-12 04:58:56.916189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.916275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:50.026 [2024-05-12 04:58:56.916311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.508 ms 00:18:50.026 [2024-05-12 04:58:56.916322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.916466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.916485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:50.026 [2024-05-12 04:58:56.916497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:50.026 [2024-05-12 04:58:56.916508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.916578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.916599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:50.026 [2024-05-12 04:58:56.916611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:50.026 [2024-05-12 04:58:56.916637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.918510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.918545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:50.026 [2024-05-12 04:58:56.918575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:18:50.026 [2024-05-12 04:58:56.918585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.918638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.918652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:50.026 [2024-05-12 04:58:56.918663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:50.026 [2024-05-12 04:58:56.918678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.918718] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:50.026 [2024-05-12 04:58:56.918733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.918744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:50.026 [2024-05-12 04:58:56.918755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:50.026 [2024-05-12 04:58:56.918766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.947684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.947724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:50.026 [2024-05-12 04:58:56.947775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.888 ms 00:18:50.026 [2024-05-12 04:58:56.947786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.026 [2024-05-12 04:58:56.947939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.026 [2024-05-12 04:58:56.947959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:18:50.027 [2024-05-12 04:58:56.947972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:50.027 [2024-05-12 04:58:56.947983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.027 [2024-05-12 04:58:56.949132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:50.027 [2024-05-12 04:58:56.953046] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.772 ms, result 0 00:18:50.027 [2024-05-12 04:58:56.953906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:50.027 [2024-05-12 04:58:56.969981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:01.278  Copying: 26/256 [MB] (26 MBps) Copying: 49/256 [MB] (23 MBps) Copying: 72/256 [MB] (22 MBps) Copying: 95/256 [MB] (23 MBps) Copying: 118/256 [MB] (22 MBps) Copying: 140/256 [MB] (22 MBps) Copying: 163/256 [MB] (22 MBps) Copying: 185/256 [MB] (22 MBps) Copying: 209/256 [MB] (23 MBps) Copying: 232/256 [MB] (22 MBps) Copying: 254/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-05-12 04:59:08.015074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:01.278 [2024-05-12 04:59:08.027476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.027520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.278 [2024-05-12 04:59:08.027557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:01.278 [2024-05-12 04:59:08.027578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.027610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:01.278 [2024-05-12 04:59:08.030985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.031019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.278 [2024-05-12 04:59:08.031035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:19:01.278 [2024-05-12 04:59:08.031046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.031385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.031418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.278 [2024-05-12 04:59:08.031431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:01.278 [2024-05-12 04:59:08.031442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.035313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.035350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.278 [2024-05-12 04:59:08.035381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.833 ms 00:19:01.278 [2024-05-12 04:59:08.035393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.043070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.043101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P 
unmaps 00:19:01.278 [2024-05-12 04:59:08.043116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.652 ms 00:19:01.278 [2024-05-12 04:59:08.043127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.074008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.074052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.278 [2024-05-12 04:59:08.074070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.808 ms 00:19:01.278 [2024-05-12 04:59:08.074082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.092104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.092147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.278 [2024-05-12 04:59:08.092171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.940 ms 00:19:01.278 [2024-05-12 04:59:08.092183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.092375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.092397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.278 [2024-05-12 04:59:08.092411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:01.278 [2024-05-12 04:59:08.092422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.126031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.126074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:01.278 [2024-05-12 04:59:08.126105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.585 ms 00:19:01.278 [2024-05-12 04:59:08.126117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.157901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.157942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:01.278 [2024-05-12 04:59:08.157958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.660 ms 00:19:01.278 [2024-05-12 04:59:08.157969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.188558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.188594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.278 [2024-05-12 04:59:08.188639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.513 ms 00:19:01.278 [2024-05-12 04:59:08.188648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.218906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.278 [2024-05-12 04:59:08.218953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:01.278 [2024-05-12 04:59:08.218970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.161 ms 00:19:01.278 [2024-05-12 04:59:08.218981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.278 [2024-05-12 04:59:08.219058] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.278 [2024-05-12 
04:59:08.219082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 
[2024-05-12 04:59:08.219443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.278 [2024-05-12 04:59:08.219611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:19:01.279 [2024-05-12 04:59:08.219717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.219996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.279 [2024-05-12 04:59:08.220402] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.279 [2024-05-12 04:59:08.220425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c7a363c9-0ae1-4712-9c4d-0d43e1b35de1 00:19:01.279 [2024-05-12 04:59:08.220435] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.279 [2024-05-12 04:59:08.220445] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.279 [2024-05-12 04:59:08.220469] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.279 [2024-05-12 04:59:08.220479] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.279 [2024-05-12 04:59:08.220490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.279 [2024-05-12 04:59:08.220500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.279 [2024-05-12 04:59:08.220510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.279 [2024-05-12 04:59:08.220519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.279 [2024-05-12 04:59:08.220527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.279 [2024-05-12 04:59:08.220538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.279 [2024-05-12 04:59:08.220552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.279 [2024-05-12 04:59:08.220563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:19:01.279 [2024-05-12 04:59:08.220573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.279 [2024-05-12 04:59:08.237677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.279 [2024-05-12 04:59:08.237712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.279 [2024-05-12 04:59:08.237743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.065 ms 00:19:01.279 [2024-05-12 04:59:08.237754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.279 [2024-05-12 04:59:08.238060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.279 [2024-05-12 04:59:08.238078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.279 [2024-05-12 04:59:08.238091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:19:01.279 [2024-05-12 04:59:08.238103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.279 [2024-05-12 04:59:08.285259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.279 [2024-05-12 04:59:08.285349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.279 [2024-05-12 04:59:08.285366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.279 [2024-05-12 04:59:08.285377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.279 [2024-05-12 04:59:08.285507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.279 [2024-05-12 04:59:08.285523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.279 [2024-05-12 04:59:08.285534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.279 [2024-05-12 04:59:08.285544] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:01.279 [2024-05-12 04:59:08.285603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.279 [2024-05-12 04:59:08.285619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.279 [2024-05-12 04:59:08.285630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.279 [2024-05-12 04:59:08.285640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.279 [2024-05-12 04:59:08.285678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.279 [2024-05-12 04:59:08.285727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.279 [2024-05-12 04:59:08.285739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.279 [2024-05-12 04:59:08.285749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.279 [2024-05-12 04:59:08.372555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.279 [2024-05-12 04:59:08.372650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.279 [2024-05-12 04:59:08.372683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.279 [2024-05-12 04:59:08.372695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.411268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.539 [2024-05-12 04:59:08.411352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.539 [2024-05-12 04:59:08.411385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.539 [2024-05-12 04:59:08.411396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.411488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.539 [2024-05-12 04:59:08.411503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.539 [2024-05-12 04:59:08.411514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.539 [2024-05-12 04:59:08.411525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.411557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.539 [2024-05-12 04:59:08.411569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.539 [2024-05-12 04:59:08.411587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.539 [2024-05-12 04:59:08.411597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.411747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.539 [2024-05-12 04:59:08.411796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.539 [2024-05-12 04:59:08.411823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.539 [2024-05-12 04:59:08.411834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.411912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.539 [2024-05-12 04:59:08.411929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.539 [2024-05-12 04:59:08.411941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:01.539 [2024-05-12 04:59:08.411958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.412007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.539 [2024-05-12 04:59:08.412022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.539 [2024-05-12 04:59:08.412034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.539 [2024-05-12 04:59:08.412045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.412100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.539 [2024-05-12 04:59:08.412117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.539 [2024-05-12 04:59:08.412134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.539 [2024-05-12 04:59:08.412149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.539 [2024-05-12 04:59:08.412352] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.899 ms, result 0 00:19:02.475 00:19:02.475 00:19:02.475 04:59:09 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:02.475 04:59:09 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:03.043 04:59:09 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:03.043 [2024-05-12 04:59:10.067723] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:03.043 [2024-05-12 04:59:10.067869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73784 ] 00:19:03.302 [2024-05-12 04:59:10.227608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.302 [2024-05-12 04:59:10.401789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.870 [2024-05-12 04:59:10.703245] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:03.870 [2024-05-12 04:59:10.703370] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:03.870 [2024-05-12 04:59:10.862665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.870 [2024-05-12 04:59:10.862714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:03.870 [2024-05-12 04:59:10.862750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:03.870 [2024-05-12 04:59:10.862766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.870 [2024-05-12 04:59:10.866047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.870 [2024-05-12 04:59:10.866097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:03.870 [2024-05-12 04:59:10.866114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:19:03.870 [2024-05-12 04:59:10.866127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.870 [2024-05-12 04:59:10.866275] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as 
write buffer cache 00:19:03.870 [2024-05-12 04:59:10.867310] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:03.870 [2024-05-12 04:59:10.867352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.870 [2024-05-12 04:59:10.867371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:03.870 [2024-05-12 04:59:10.867384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:19:03.870 [2024-05-12 04:59:10.867395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.870 [2024-05-12 04:59:10.868669] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:03.870 [2024-05-12 04:59:10.885985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.870 [2024-05-12 04:59:10.886023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:03.870 [2024-05-12 04:59:10.886055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.317 ms 00:19:03.870 [2024-05-12 04:59:10.886066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.870 [2024-05-12 04:59:10.886172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.870 [2024-05-12 04:59:10.886193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:03.870 [2024-05-12 04:59:10.886209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:03.871 [2024-05-12 04:59:10.886273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.891178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-05-12 04:59:10.891232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:03.871 [2024-05-12 04:59:10.891249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.846 ms 00:19:03.871 [2024-05-12 04:59:10.891260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.891395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-05-12 04:59:10.891420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:03.871 [2024-05-12 04:59:10.891433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:03.871 [2024-05-12 04:59:10.891445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.891485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-05-12 04:59:10.891500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:03.871 [2024-05-12 04:59:10.891513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:03.871 [2024-05-12 04:59:10.891524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.891560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:03.871 [2024-05-12 04:59:10.895662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-05-12 04:59:10.895702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:03.871 [2024-05-12 04:59:10.895717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:19:03.871 [2024-05-12 04:59:10.895729] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.895798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-05-12 04:59:10.895821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:03.871 [2024-05-12 04:59:10.895834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:03.871 [2024-05-12 04:59:10.895845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.895878] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:03.871 [2024-05-12 04:59:10.895949] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:03.871 [2024-05-12 04:59:10.895991] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:03.871 [2024-05-12 04:59:10.896011] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:03.871 [2024-05-12 04:59:10.896098] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:03.871 [2024-05-12 04:59:10.896115] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:03.871 [2024-05-12 04:59:10.896130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:03.871 [2024-05-12 04:59:10.896144] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896157] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896170] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:03.871 [2024-05-12 04:59:10.896181] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:03.871 [2024-05-12 04:59:10.896192] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:03.871 [2024-05-12 04:59:10.896202] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:03.871 [2024-05-12 04:59:10.896214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-05-12 04:59:10.896252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:03.871 [2024-05-12 04:59:10.896274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:03.871 [2024-05-12 04:59:10.896285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.896390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-05-12 04:59:10.896409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:03.871 [2024-05-12 04:59:10.896421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:03.871 [2024-05-12 04:59:10.896432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-05-12 04:59:10.896523] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:03.871 [2024-05-12 04:59:10.896540] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:03.871 [2024-05-12 04:59:10.896552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.871 
[2024-05-12 04:59:10.896569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:03.871 [2024-05-12 04:59:10.896592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:03.871 [2024-05-12 04:59:10.896623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.871 [2024-05-12 04:59:10.896644] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:03.871 [2024-05-12 04:59:10.896654] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:03.871 [2024-05-12 04:59:10.896664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.871 [2024-05-12 04:59:10.896674] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:03.871 [2024-05-12 04:59:10.896685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:03.871 [2024-05-12 04:59:10.896696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:03.871 [2024-05-12 04:59:10.896717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:03.871 [2024-05-12 04:59:10.896727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:03.871 [2024-05-12 04:59:10.896760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:03.871 [2024-05-12 04:59:10.896771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:03.871 [2024-05-12 04:59:10.896791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:03.871 [2024-05-12 04:59:10.896821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:03.871 [2024-05-12 04:59:10.896851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:03.871 [2024-05-12 04:59:10.896882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:19:03.871 [2024-05-12 04:59:10.896912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:03.871 [2024-05-12 04:59:10.896922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.871 [2024-05-12 04:59:10.896932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:03.871 [2024-05-12 04:59:10.896942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:03.871 [2024-05-12 04:59:10.896952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.871 [2024-05-12 04:59:10.896962] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:03.871 [2024-05-12 04:59:10.896973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:03.871 [2024-05-12 04:59:10.896984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.871 [2024-05-12 04:59:10.896994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.871 [2024-05-12 04:59:10.897006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:03.871 [2024-05-12 04:59:10.897016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:03.871 [2024-05-12 04:59:10.897026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:03.871 [2024-05-12 04:59:10.897038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:03.871 [2024-05-12 04:59:10.897048] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:03.871 [2024-05-12 04:59:10.897058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:03.871 [2024-05-12 04:59:10.897070] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:03.871 [2024-05-12 04:59:10.897089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.871 [2024-05-12 04:59:10.897102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:03.871 [2024-05-12 04:59:10.897114] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:03.871 [2024-05-12 04:59:10.897126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:03.871 [2024-05-12 04:59:10.897137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:03.871 [2024-05-12 04:59:10.897148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:03.871 [2024-05-12 04:59:10.897159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:03.871 [2024-05-12 04:59:10.897171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:03.871 [2024-05-12 04:59:10.897182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:03.872 [2024-05-12 04:59:10.897193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf 
ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:03.872 [2024-05-12 04:59:10.897204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:03.872 [2024-05-12 04:59:10.897233] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:03.872 [2024-05-12 04:59:10.897248] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:03.872 [2024-05-12 04:59:10.897259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:03.872 [2024-05-12 04:59:10.897271] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:03.872 [2024-05-12 04:59:10.897284] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.872 [2024-05-12 04:59:10.897296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:03.872 [2024-05-12 04:59:10.897308] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:03.872 [2024-05-12 04:59:10.897319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:03.872 [2024-05-12 04:59:10.897330] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:03.872 [2024-05-12 04:59:10.897343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.897361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:03.872 [2024-05-12 04:59:10.897373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:19:03.872 [2024-05-12 04:59:10.897384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.872 [2024-05-12 04:59:10.915618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.915664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.872 [2024-05-12 04:59:10.915698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.171 ms 00:19:03.872 [2024-05-12 04:59:10.915709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.872 [2024-05-12 04:59:10.915855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.915873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:03.872 [2024-05-12 04:59:10.915886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:03.872 [2024-05-12 04:59:10.915906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.872 [2024-05-12 04:59:10.968798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.968851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.872 [2024-05-12 04:59:10.968871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.840 ms 00:19:03.872 [2024-05-12 04:59:10.968883] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:03.872 [2024-05-12 04:59:10.969002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.969022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.872 [2024-05-12 04:59:10.969037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:03.872 [2024-05-12 04:59:10.969048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.872 [2024-05-12 04:59:10.969466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.969485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.872 [2024-05-12 04:59:10.969498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:19:03.872 [2024-05-12 04:59:10.969509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.872 [2024-05-12 04:59:10.969670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.969690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.872 [2024-05-12 04:59:10.969703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:03.872 [2024-05-12 04:59:10.969714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.872 [2024-05-12 04:59:10.987702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.872 [2024-05-12 04:59:10.987744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.872 [2024-05-12 04:59:10.987760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:19:03.872 [2024-05-12 04:59:10.987771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.004887] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:04.131 [2024-05-12 04:59:11.004934] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.131 [2024-05-12 04:59:11.004953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.004965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.131 [2024-05-12 04:59:11.004979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.009 ms 00:19:04.131 [2024-05-12 04:59:11.004990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.034893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.034932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.131 [2024-05-12 04:59:11.034964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.809 ms 00:19:04.131 [2024-05-12 04:59:11.034982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.051260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.051359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.131 [2024-05-12 04:59:11.051377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.176 ms 00:19:04.131 [2024-05-12 04:59:11.051388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 
04:59:11.066958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.067008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.131 [2024-05-12 04:59:11.067040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.481 ms 00:19:04.131 [2024-05-12 04:59:11.067052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.067622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.067652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.131 [2024-05-12 04:59:11.067667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:19:04.131 [2024-05-12 04:59:11.067678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.145476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.145531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:04.131 [2024-05-12 04:59:11.145561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.764 ms 00:19:04.131 [2024-05-12 04:59:11.145573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.159092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:04.131 [2024-05-12 04:59:11.173500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.173560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.131 [2024-05-12 04:59:11.173579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.739 ms 00:19:04.131 [2024-05-12 04:59:11.173590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.173714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.173735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:04.131 [2024-05-12 04:59:11.173748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:04.131 [2024-05-12 04:59:11.173758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.173835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.173856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.131 [2024-05-12 04:59:11.173868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:04.131 [2024-05-12 04:59:11.173878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.175819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.175854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:04.131 [2024-05-12 04:59:11.175884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:19:04.131 [2024-05-12 04:59:11.175902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.175959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.175974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.131 [2024-05-12 04:59:11.175986] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.131 [2024-05-12 04:59:11.176003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.176046] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:04.131 [2024-05-12 04:59:11.176062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.176073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:04.131 [2024-05-12 04:59:11.176085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:04.131 [2024-05-12 04:59:11.176097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.207932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.207989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.131 [2024-05-12 04:59:11.208015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.804 ms 00:19:04.131 [2024-05-12 04:59:11.208027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.208152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-05-12 04:59:11.208173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.131 [2024-05-12 04:59:11.208187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:04.131 [2024-05-12 04:59:11.208198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-05-12 04:59:11.209220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.131 [2024-05-12 04:59:11.213851] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.191 ms, result 0 00:19:04.131 [2024-05-12 04:59:11.214744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:04.131 [2024-05-12 04:59:11.231619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.390  Copying: 4096/4096 [kB] (average 24 MBps)[2024-05-12 04:59:11.401279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:04.390 [2024-05-12 04:59:11.413675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.413715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:04.390 [2024-05-12 04:59:11.413732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:04.390 [2024-05-12 04:59:11.413750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.390 [2024-05-12 04:59:11.413781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:04.390 [2024-05-12 04:59:11.416976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.417004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:04.390 [2024-05-12 04:59:11.417035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.175 ms 00:19:04.390 [2024-05-12 04:59:11.417045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:04.390 [2024-05-12 04:59:11.418751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.418790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:04.390 [2024-05-12 04:59:11.418805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:19:04.390 [2024-05-12 04:59:11.418816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.390 [2024-05-12 04:59:11.423070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.423114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:04.390 [2024-05-12 04:59:11.423129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:19:04.390 [2024-05-12 04:59:11.423140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.390 [2024-05-12 04:59:11.430354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.430384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:04.390 [2024-05-12 04:59:11.430413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.173 ms 00:19:04.390 [2024-05-12 04:59:11.430423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.390 [2024-05-12 04:59:11.462247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.462310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:04.390 [2024-05-12 04:59:11.462358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.749 ms 00:19:04.390 [2024-05-12 04:59:11.462368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.390 [2024-05-12 04:59:11.480071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.480112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:04.390 [2024-05-12 04:59:11.480134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.612 ms 00:19:04.390 [2024-05-12 04:59:11.480146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.390 [2024-05-12 04:59:11.480400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.480422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:04.390 [2024-05-12 04:59:11.480435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:04.390 [2024-05-12 04:59:11.480446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.390 [2024-05-12 04:59:11.512749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.390 [2024-05-12 04:59:11.512819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:04.390 [2024-05-12 04:59:11.512865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.279 ms 00:19:04.390 [2024-05-12 04:59:11.512876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.650 [2024-05-12 04:59:11.546142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.650 [2024-05-12 04:59:11.546260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:04.650 [2024-05-12 04:59:11.546294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.187 ms 00:19:04.650 [2024-05-12 04:59:11.546306] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.650 [2024-05-12 04:59:11.577483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.650 [2024-05-12 04:59:11.577525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:04.650 [2024-05-12 04:59:11.577542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.096 ms 00:19:04.650 [2024-05-12 04:59:11.577553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.650 [2024-05-12 04:59:11.608465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.650 [2024-05-12 04:59:11.608500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:04.650 [2024-05-12 04:59:11.608531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.786 ms 00:19:04.650 [2024-05-12 04:59:11.608541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.650 [2024-05-12 04:59:11.608613] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:04.650 [2024-05-12 04:59:11.608637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608814] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.608991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 
[2024-05-12 04:59:11.609073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:19:04.650 [2024-05-12 04:59:11.609412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:04.650 [2024-05-12 04:59:11.609468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:04.651 [2024-05-12 04:59:11.609820] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:04.651 [2024-05-12 04:59:11.609845] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c7a363c9-0ae1-4712-9c4d-0d43e1b35de1 00:19:04.651 [2024-05-12 04:59:11.609857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:04.651 [2024-05-12 04:59:11.609868] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:04.651 [2024-05-12 04:59:11.609879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:04.651 [2024-05-12 04:59:11.609890] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:04.651 [2024-05-12 04:59:11.609900] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:04.651 [2024-05-12 04:59:11.609912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:04.651 [2024-05-12 04:59:11.609923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:04.651 [2024-05-12 04:59:11.609933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:04.651 [2024-05-12 04:59:11.609943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:04.651 [2024-05-12 04:59:11.609954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.651 [2024-05-12 04:59:11.609970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:04.651 [2024-05-12 04:59:11.609982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:19:04.651 [2024-05-12 04:59:11.609994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.651 [2024-05-12 04:59:11.626775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.651 [2024-05-12 04:59:11.626826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:04.651 [2024-05-12 04:59:11.626842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.756 ms 00:19:04.651 [2024-05-12 04:59:11.626853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.651 [2024-05-12 04:59:11.627181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.651 [2024-05-12 04:59:11.627199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:19:04.651 [2024-05-12 04:59:11.627211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:04.651 [2024-05-12 04:59:11.627238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.651 [2024-05-12 04:59:11.676860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.651 [2024-05-12 04:59:11.676902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.651 [2024-05-12 04:59:11.676918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.651 [2024-05-12 04:59:11.676929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.651 [2024-05-12 04:59:11.677034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.651 [2024-05-12 04:59:11.677052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.651 [2024-05-12 04:59:11.677064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.651 [2024-05-12 04:59:11.677075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.651 [2024-05-12 04:59:11.677132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.651 [2024-05-12 04:59:11.677149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.651 [2024-05-12 04:59:11.677176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.651 [2024-05-12 04:59:11.677186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.651 [2024-05-12 04:59:11.677246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.651 [2024-05-12 04:59:11.677296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.651 [2024-05-12 04:59:11.677311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.651 [2024-05-12 04:59:11.677321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.910 [2024-05-12 04:59:11.778097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.910 [2024-05-12 04:59:11.778161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.910 [2024-05-12 04:59:11.778195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.910 [2024-05-12 04:59:11.778206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.910 [2024-05-12 04:59:11.818209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.910 [2024-05-12 04:59:11.818281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.910 [2024-05-12 04:59:11.818315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.910 [2024-05-12 04:59:11.818327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.910 [2024-05-12 04:59:11.818425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.910 [2024-05-12 04:59:11.818444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.910 [2024-05-12 04:59:11.818456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.910 [2024-05-12 04:59:11.818468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.910 [2024-05-12 04:59:11.818503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.910 [2024-05-12 
04:59:11.818517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.910 [2024-05-12 04:59:11.818536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.910 [2024-05-12 04:59:11.818547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.910 [2024-05-12 04:59:11.818666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.910 [2024-05-12 04:59:11.818685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.910 [2024-05-12 04:59:11.818697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.910 [2024-05-12 04:59:11.818708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.910 [2024-05-12 04:59:11.818758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.910 [2024-05-12 04:59:11.818774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:04.910 [2024-05-12 04:59:11.818792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.910 [2024-05-12 04:59:11.818804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.910 [2024-05-12 04:59:11.818850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.910 [2024-05-12 04:59:11.818873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.910 [2024-05-12 04:59:11.818885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.910 [2024-05-12 04:59:11.818896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.911 [2024-05-12 04:59:11.818950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.911 [2024-05-12 04:59:11.818967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.911 [2024-05-12 04:59:11.818984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.911 [2024-05-12 04:59:11.818999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.911 [2024-05-12 04:59:11.819164] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.487 ms, result 0 00:19:05.844 00:19:05.844 00:19:05.844 04:59:12 -- ftl/trim.sh@93 -- # svcpid=73815 00:19:05.844 04:59:12 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:05.844 04:59:12 -- ftl/trim.sh@94 -- # waitforlisten 73815 00:19:05.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.844 04:59:12 -- common/autotest_common.sh@819 -- # '[' -z 73815 ']' 00:19:05.844 04:59:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.844 04:59:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:05.844 04:59:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.844 04:59:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:05.844 04:59:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.102 [2024-05-12 04:59:13.049736] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:06.102 [2024-05-12 04:59:13.050128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73815 ] 00:19:06.102 [2024-05-12 04:59:13.221272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.361 [2024-05-12 04:59:13.408517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:06.361 [2024-05-12 04:59:13.408962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.735 04:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:07.735 04:59:14 -- common/autotest_common.sh@852 -- # return 0 00:19:07.735 04:59:14 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:07.735 [2024-05-12 04:59:14.852207] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.735 [2024-05-12 04:59:14.852364] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.994 [2024-05-12 04:59:15.020949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.020999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:07.994 [2024-05-12 04:59:15.021036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:07.994 [2024-05-12 04:59:15.021047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.024157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.024207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.994 [2024-05-12 04:59:15.024292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:19:07.994 [2024-05-12 04:59:15.024335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.024489] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:07.994 [2024-05-12 04:59:15.025468] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:07.994 [2024-05-12 04:59:15.025509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.025539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.994 [2024-05-12 04:59:15.025552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:19:07.994 [2024-05-12 04:59:15.025563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.026822] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:07.994 [2024-05-12 04:59:15.042778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.042884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:07.994 [2024-05-12 04:59:15.042936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.961 ms 00:19:07.994 [2024-05-12 04:59:15.042953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.043086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.043118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:19:07.994 [2024-05-12 04:59:15.043131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:07.994 [2024-05-12 04:59:15.043143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.047420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.047464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.994 [2024-05-12 04:59:15.047496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:19:07.994 [2024-05-12 04:59:15.047511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.047630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.047650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.994 [2024-05-12 04:59:15.047663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:07.994 [2024-05-12 04:59:15.047674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.047707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.047723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.994 [2024-05-12 04:59:15.047738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:07.994 [2024-05-12 04:59:15.047766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.047801] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:07.994 [2024-05-12 04:59:15.052198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.052291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.994 [2024-05-12 04:59:15.052341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.405 ms 00:19:07.994 [2024-05-12 04:59:15.052353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.052421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.052437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.994 [2024-05-12 04:59:15.052451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:07.994 [2024-05-12 04:59:15.052461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.994 [2024-05-12 04:59:15.052491] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:07.994 [2024-05-12 04:59:15.052516] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:07.994 [2024-05-12 04:59:15.052592] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:07.994 [2024-05-12 04:59:15.052613] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:07.994 [2024-05-12 04:59:15.052700] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:07.994 [2024-05-12 04:59:15.052717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:19:07.994 [2024-05-12 04:59:15.052734] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:07.994 [2024-05-12 04:59:15.052750] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.994 [2024-05-12 04:59:15.052765] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.994 [2024-05-12 04:59:15.052780] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:07.994 [2024-05-12 04:59:15.052793] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.994 [2024-05-12 04:59:15.052804] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:07.994 [2024-05-12 04:59:15.052819] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:07.994 [2024-05-12 04:59:15.052831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.994 [2024-05-12 04:59:15.052844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.995 [2024-05-12 04:59:15.052856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:19:07.995 [2024-05-12 04:59:15.052868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.052946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.052962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.995 [2024-05-12 04:59:15.052974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:07.995 [2024-05-12 04:59:15.052990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.053096] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.995 [2024-05-12 04:59:15.053117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.995 [2024-05-12 04:59:15.053130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.995 [2024-05-12 04:59:15.053170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:07.995 [2024-05-12 04:59:15.053207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.995 [2024-05-12 04:59:15.053230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.995 [2024-05-12 04:59:15.053243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:07.995 [2024-05-12 04:59:15.053253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.995 [2024-05-12 04:59:15.053266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.995 [2024-05-12 04:59:15.053276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:07.995 [2024-05-12 04:59:15.053302] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:07.995 [2024-05-12 04:59:15.053331] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:07.995 [2024-05-12 04:59:15.053344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:07.995 [2024-05-12 04:59:15.053367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:07.995 [2024-05-12 04:59:15.053380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.995 [2024-05-12 04:59:15.053404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.995 [2024-05-12 04:59:15.053439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:07.995 [2024-05-12 04:59:15.053474] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.995 [2024-05-12 04:59:15.053523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.995 [2024-05-12 04:59:15.053558] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.995 [2024-05-12 04:59:15.053581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.995 [2024-05-12 04:59:15.053591] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:07.995 [2024-05-12 04:59:15.053606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.995 [2024-05-12 04:59:15.053616] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.995 [2024-05-12 04:59:15.053630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.995 [2024-05-12 04:59:15.053641] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.995 [2024-05-12 04:59:15.053668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:07.995 [2024-05-12 04:59:15.053681] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.995 [2024-05-12 04:59:15.053691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:19:07.995 [2024-05-12 04:59:15.053704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.995 [2024-05-12 04:59:15.053714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.995 [2024-05-12 04:59:15.053726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.995 [2024-05-12 04:59:15.053739] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.995 [2024-05-12 04:59:15.053756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.995 [2024-05-12 04:59:15.053769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:07.995 [2024-05-12 04:59:15.053783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:07.995 [2024-05-12 04:59:15.053795] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:07.995 [2024-05-12 04:59:15.053811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:07.995 [2024-05-12 04:59:15.053823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:07.995 [2024-05-12 04:59:15.053837] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:07.995 [2024-05-12 04:59:15.053849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:07.995 [2024-05-12 04:59:15.053862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:07.995 [2024-05-12 04:59:15.053874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:07.995 [2024-05-12 04:59:15.053887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:07.995 [2024-05-12 04:59:15.053899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:07.995 [2024-05-12 04:59:15.053914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:07.995 [2024-05-12 04:59:15.053926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:07.995 [2024-05-12 04:59:15.053939] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:07.995 [2024-05-12 04:59:15.053952] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.995 [2024-05-12 04:59:15.053967] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.995 [2024-05-12 04:59:15.053979] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.995 [2024-05-12 04:59:15.053992] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.995 [2024-05-12 04:59:15.054004] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.995 [2024-05-12 04:59:15.054020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.054032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.995 [2024-05-12 04:59:15.054046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:19:07.995 [2024-05-12 04:59:15.054057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.071576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.071617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.995 [2024-05-12 04:59:15.071652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.454 ms 00:19:07.995 [2024-05-12 04:59:15.071664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.071804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.071831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:07.995 [2024-05-12 04:59:15.071848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:07.995 [2024-05-12 04:59:15.071858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.107719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.107765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.995 [2024-05-12 04:59:15.107801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.833 ms 00:19:07.995 [2024-05-12 04:59:15.107812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.107949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.107966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.995 [2024-05-12 04:59:15.107980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:07.995 [2024-05-12 04:59:15.107992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.108394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.108413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.995 [2024-05-12 04:59:15.108459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:19:07.995 [2024-05-12 04:59:15.108471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.995 [2024-05-12 04:59:15.108608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.995 [2024-05-12 04:59:15.108625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.996 [2024-05-12 04:59:15.108638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:07.996 [2024-05-12 04:59:15.108649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.126476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.126520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:08.254 [2024-05-12 04:59:15.126540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.798 ms 00:19:08.254 [2024-05-12 04:59:15.126553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.142029] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:08.254 [2024-05-12 04:59:15.142069] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:08.254 [2024-05-12 04:59:15.142104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.142115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:08.254 [2024-05-12 04:59:15.142129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.370 ms 00:19:08.254 [2024-05-12 04:59:15.142139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.169875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.169912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:08.254 [2024-05-12 04:59:15.169951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.635 ms 00:19:08.254 [2024-05-12 04:59:15.169961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.184725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.184766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:08.254 [2024-05-12 04:59:15.184785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.681 ms 00:19:08.254 [2024-05-12 04:59:15.184796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.200360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.200400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:08.254 [2024-05-12 04:59:15.200422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.477 ms 00:19:08.254 [2024-05-12 04:59:15.200434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.200910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.200937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:08.254 [2024-05-12 04:59:15.200954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:08.254 [2024-05-12 04:59:15.200965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.274876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.274945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:08.254 [2024-05-12 04:59:15.274986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.877 ms 00:19:08.254 [2024-05-12 04:59:15.274999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 
04:59:15.287676] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:08.254 [2024-05-12 04:59:15.300971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.301057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:08.254 [2024-05-12 04:59:15.301077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.839 ms 00:19:08.254 [2024-05-12 04:59:15.301090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.254 [2024-05-12 04:59:15.301207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.254 [2024-05-12 04:59:15.301285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:08.255 [2024-05-12 04:59:15.301303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:08.255 [2024-05-12 04:59:15.301317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.255 [2024-05-12 04:59:15.301382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.255 [2024-05-12 04:59:15.301402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:08.255 [2024-05-12 04:59:15.301415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:08.255 [2024-05-12 04:59:15.301429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.255 [2024-05-12 04:59:15.303299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.255 [2024-05-12 04:59:15.303335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:08.255 [2024-05-12 04:59:15.303366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:19:08.255 [2024-05-12 04:59:15.303378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.255 [2024-05-12 04:59:15.303412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.255 [2024-05-12 04:59:15.303431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:08.255 [2024-05-12 04:59:15.303446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:08.255 [2024-05-12 04:59:15.303459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.255 [2024-05-12 04:59:15.303502] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:08.255 [2024-05-12 04:59:15.303521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.255 [2024-05-12 04:59:15.303533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:08.255 [2024-05-12 04:59:15.303546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:08.255 [2024-05-12 04:59:15.303557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.255 [2024-05-12 04:59:15.333691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.255 [2024-05-12 04:59:15.333860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:08.255 [2024-05-12 04:59:15.333984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.101 ms 00:19:08.255 [2024-05-12 04:59:15.334036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.255 [2024-05-12 04:59:15.334191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.255 [2024-05-12 04:59:15.334284] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:08.255 [2024-05-12 04:59:15.334417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:08.255 [2024-05-12 04:59:15.334534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.255 [2024-05-12 04:59:15.335533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:08.255 [2024-05-12 04:59:15.339656] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.172 ms, result 0 00:19:08.255 [2024-05-12 04:59:15.340852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:08.255 Some configs were skipped because the RPC state that can call them passed over. 00:19:08.513 04:59:15 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:08.771 [2024-05-12 04:59:15.652809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.772 [2024-05-12 04:59:15.653062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:08.772 [2024-05-12 04:59:15.653212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.981 ms 00:19:08.772 [2024-05-12 04:59:15.653368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.772 [2024-05-12 04:59:15.653467] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.635 ms, result 0 00:19:08.772 true 00:19:08.772 04:59:15 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:09.030 [2024-05-12 04:59:15.946506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.030 [2024-05-12 04:59:15.946556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:09.030 [2024-05-12 04:59:15.946594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.153 ms 00:19:09.030 [2024-05-12 04:59:15.946605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.030 [2024-05-12 04:59:15.946667] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 29.329 ms, result 0 00:19:09.030 true 00:19:09.030 04:59:15 -- ftl/trim.sh@102 -- # killprocess 73815 00:19:09.030 04:59:15 -- common/autotest_common.sh@926 -- # '[' -z 73815 ']' 00:19:09.030 04:59:15 -- common/autotest_common.sh@930 -- # kill -0 73815 00:19:09.030 04:59:15 -- common/autotest_common.sh@931 -- # uname 00:19:09.030 04:59:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:09.030 04:59:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73815 00:19:09.030 killing process with pid 73815 00:19:09.030 04:59:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:09.030 04:59:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:09.030 04:59:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73815' 00:19:09.030 04:59:15 -- common/autotest_common.sh@945 -- # kill 73815 00:19:09.030 04:59:15 -- common/autotest_common.sh@950 -- # wait 73815 00:19:09.968 [2024-05-12 04:59:16.878613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.968 [2024-05-12 04:59:16.878692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:19:09.968 [2024-05-12 04:59:16.878712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:09.968 [2024-05-12 04:59:16.878725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.969 [2024-05-12 04:59:16.878753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:09.969 [2024-05-12 04:59:16.881941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.969 [2024-05-12 04:59:16.881974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:09.969 [2024-05-12 04:59:16.882009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:19:09.969 [2024-05-12 04:59:16.882020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.969 [2024-05-12 04:59:16.882363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.969 [2024-05-12 04:59:16.882382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:09.969 [2024-05-12 04:59:16.882397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:19:09.969 [2024-05-12 04:59:16.882407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.969 [2024-05-12 04:59:16.886674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.969 [2024-05-12 04:59:16.886715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:09.969 [2024-05-12 04:59:16.886734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.239 ms 00:19:09.969 [2024-05-12 04:59:16.886748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.969 [2024-05-12 04:59:16.893799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.969 [2024-05-12 04:59:16.893830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:09.969 [2024-05-12 04:59:16.893880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:19:09.969 [2024-05-12 04:59:16.893892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.969 [2024-05-12 04:59:16.905961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.969 [2024-05-12 04:59:16.905998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:09.969 [2024-05-12 04:59:16.906045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.010 ms 00:19:09.969 [2024-05-12 04:59:16.906056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.969 [2024-05-12 04:59:16.914390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.969 [2024-05-12 04:59:16.914429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:09.969 [2024-05-12 04:59:16.914465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.287 ms 00:19:09.969 [2024-05-12 04:59:16.914477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.969 [2024-05-12 04:59:16.914636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.969 [2024-05-12 04:59:16.914654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:09.969 [2024-05-12 04:59:16.914668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:09.969 [2024-05-12 04:59:16.914679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:09.969 [2024-05-12 04:59:16.927003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:09.969 [2024-05-12 04:59:16.927039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:19:09.969 [2024-05-12 04:59:16.927072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.298 ms
00:19:09.969 [2024-05-12 04:59:16.927083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:09.969 [2024-05-12 04:59:16.939797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:09.969 [2024-05-12 04:59:16.939832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:19:09.969 [2024-05-12 04:59:16.939854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.668 ms
00:19:09.969 [2024-05-12 04:59:16.939865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:09.969 [2024-05-12 04:59:16.952006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:09.969 [2024-05-12 04:59:16.952043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:09.969 [2024-05-12 04:59:16.952061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.070 ms
00:19:09.969 [2024-05-12 04:59:16.952073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:09.969 [2024-05-12 04:59:16.964725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:09.969 [2024-05-12 04:59:16.964761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:09.969 [2024-05-12 04:59:16.964795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.576 ms
00:19:09.969 [2024-05-12 04:59:16.964805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:09.969 [2024-05-12 04:59:16.964882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:09.969 [2024-05-12 04:59:16.964904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band entries, condensed)
00:19:09.970 [2024-05-12 04:59:16.966293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:09.970 [2024-05-12 04:59:16.966328] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c7a363c9-0ae1-4712-9c4d-0d43e1b35de1
00:19:09.970 [2024-05-12 04:59:16.966343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:09.970 [2024-05-12 04:59:16.966356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:09.970 [2024-05-12 04:59:16.966367] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:09.970 [2024-05-12 04:59:16.966379] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:09.970 [2024-05-12 04:59:16.966389] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:09.970 [2024-05-12 04:59:16.966402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:09.970 [2024-05-12 04:59:16.966413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:09.970 [2024-05-12 04:59:16.966424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:09.970 [2024-05-12 04:59:16.966434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:09.970 [2024-05-12 04:59:16.966447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:09.970 [2024-05-12 04:59:16.966458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:09.970 [2024-05-12 04:59:16.966471]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:19:09.970 [2024-05-12 04:59:16.966482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.970 [2024-05-12 04:59:16.982213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.970 [2024-05-12 04:59:16.982311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:09.970 [2024-05-12 04:59:16.982349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:19:09.970 [2024-05-12 04:59:16.982361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.970 [2024-05-12 04:59:16.982649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.970 [2024-05-12 04:59:16.982673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:09.970 [2024-05-12 04:59:16.982688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:19:09.970 [2024-05-12 04:59:16.982700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.970 [2024-05-12 04:59:17.035046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.970 [2024-05-12 04:59:17.035088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.970 [2024-05-12 04:59:17.035123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.970 [2024-05-12 04:59:17.035133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.970 [2024-05-12 04:59:17.035225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.970 [2024-05-12 04:59:17.035278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.970 [2024-05-12 04:59:17.035293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.970 [2024-05-12 04:59:17.035303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.970 [2024-05-12 04:59:17.035382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.970 [2024-05-12 04:59:17.035400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.970 [2024-05-12 04:59:17.035416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.970 [2024-05-12 04:59:17.035426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.970 [2024-05-12 04:59:17.035452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.970 [2024-05-12 04:59:17.035465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.970 [2024-05-12 04:59:17.035478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.970 [2024-05-12 04:59:17.035488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.229 [2024-05-12 04:59:17.128317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.229 [2024-05-12 04:59:17.128414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.229 [2024-05-12 04:59:17.128450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.229 [2024-05-12 04:59:17.128461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.229 [2024-05-12 04:59:17.165707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.229 [2024-05-12 04:59:17.165745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:19:10.229 [2024-05-12 04:59:17.165779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.229 [2024-05-12 04:59:17.165790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.229 [2024-05-12 04:59:17.165877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.229 [2024-05-12 04:59:17.165893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.229 [2024-05-12 04:59:17.165908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.229 [2024-05-12 04:59:17.165919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.229 [2024-05-12 04:59:17.165954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.230 [2024-05-12 04:59:17.165967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.230 [2024-05-12 04:59:17.165980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.230 [2024-05-12 04:59:17.165990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.230 [2024-05-12 04:59:17.166101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.230 [2024-05-12 04:59:17.166120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.230 [2024-05-12 04:59:17.166135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.230 [2024-05-12 04:59:17.166145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.230 [2024-05-12 04:59:17.166195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.230 [2024-05-12 04:59:17.166211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:10.230 [2024-05-12 04:59:17.166224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.230 [2024-05-12 04:59:17.166250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.230 [2024-05-12 04:59:17.166332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.230 [2024-05-12 04:59:17.166351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.230 [2024-05-12 04:59:17.166367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.230 [2024-05-12 04:59:17.166377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.230 [2024-05-12 04:59:17.166431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.230 [2024-05-12 04:59:17.166446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.230 [2024-05-12 04:59:17.166459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.230 [2024-05-12 04:59:17.166469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.230 [2024-05-12 04:59:17.166639] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 288.003 ms, result 0 00:19:11.166 04:59:18 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:11.425 [2024-05-12 04:59:18.354700] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
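With the shutdown finished (result 0), ftl/trim.sh relaunches spdk_dd, this time reading 65536 blocks from the FTL bdev ftl0 into the test data file (--ib names the input bdev, --of the output file, --json the bdev configuration). A quick cross-check of the sizes and rate, assuming the 4096-byte logical block size implied by the 256 MB total in the copy progress further down (the block size is inferred, not read from ftl.json):

    # Cross-check of the transfer size and the average rate this run reports.
    # Assumptions: spdk_dd's --count is in logical blocks, and the block size
    # is 4096 bytes, inferred from the "256/256 [MB]" progress totals below.
    BLOCK_SIZE = 4096          # bytes (assumed, not read from ftl.json)
    COUNT = 65536              # --count argument shown above

    total_mb = COUNT * BLOCK_SIZE // 2**20
    print(total_mb)            # 256 -> matches "Copying: 256/256 [MB]"

    # The copy spans roughly 04:59:19.5 to 04:59:31.0 (the app_thread IO
    # channel create/destroy stamps around the progress output), ~11.5 s:
    print(round(total_mb / 11.5, 1))   # 22.3 -> consistent with "average 22 MBps"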
00:19:11.425 [2024-05-12 04:59:18.354859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73892 ] 00:19:11.425 [2024-05-12 04:59:18.524604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.683 [2024-05-12 04:59:18.709371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.942 [2024-05-12 04:59:19.015438] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:11.942 [2024-05-12 04:59:19.015543] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.202 [2024-05-12 04:59:19.170855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.170907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:12.202 [2024-05-12 04:59:19.170944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:12.202 [2024-05-12 04:59:19.170969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.174335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.174408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.202 [2024-05-12 04:59:19.174458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:19:12.202 [2024-05-12 04:59:19.174482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.174604] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:12.202 [2024-05-12 04:59:19.175579] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:12.202 [2024-05-12 04:59:19.175621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.175642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.202 [2024-05-12 04:59:19.175655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:19:12.202 [2024-05-12 04:59:19.175667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.176921] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:12.202 [2024-05-12 04:59:19.193118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.193163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:12.202 [2024-05-12 04:59:19.193182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.198 ms 00:19:12.202 [2024-05-12 04:59:19.193194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.193322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.193345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:12.202 [2024-05-12 04:59:19.193362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:12.202 [2024-05-12 04:59:19.193374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.197787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 
04:59:19.197824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.202 [2024-05-12 04:59:19.197856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.340 ms 00:19:12.202 [2024-05-12 04:59:19.197867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.197991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.198015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.202 [2024-05-12 04:59:19.198028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:12.202 [2024-05-12 04:59:19.198038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.198074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.198089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:12.202 [2024-05-12 04:59:19.198101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:12.202 [2024-05-12 04:59:19.198112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.198145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:12.202 [2024-05-12 04:59:19.202556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.202594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.202 [2024-05-12 04:59:19.202642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.423 ms 00:19:12.202 [2024-05-12 04:59:19.202653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.202719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.202741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:12.202 [2024-05-12 04:59:19.202754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:12.202 [2024-05-12 04:59:19.202764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.202795] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:12.202 [2024-05-12 04:59:19.202822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:12.202 [2024-05-12 04:59:19.202860] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:12.202 [2024-05-12 04:59:19.202880] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:12.202 [2024-05-12 04:59:19.202960] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:12.202 [2024-05-12 04:59:19.202975] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:12.202 [2024-05-12 04:59:19.202989] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:12.202 [2024-05-12 04:59:19.203003] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203016] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203028] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:12.202 [2024-05-12 04:59:19.203038] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:12.202 [2024-05-12 04:59:19.203049] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:12.202 [2024-05-12 04:59:19.203059] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:12.202 [2024-05-12 04:59:19.203071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.203082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:12.202 [2024-05-12 04:59:19.203099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:19:12.202 [2024-05-12 04:59:19.203109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.203184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.202 [2024-05-12 04:59:19.203199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:12.202 [2024-05-12 04:59:19.203211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:12.202 [2024-05-12 04:59:19.203222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.202 [2024-05-12 04:59:19.203371] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:12.202 [2024-05-12 04:59:19.203392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:12.202 [2024-05-12 04:59:19.203405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203451] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:12.202 [2024-05-12 04:59:19.203463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:12.202 [2024-05-12 04:59:19.203497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.202 [2024-05-12 04:59:19.203519] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:12.202 [2024-05-12 04:59:19.203530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:12.202 [2024-05-12 04:59:19.203540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.202 [2024-05-12 04:59:19.203550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:12.202 [2024-05-12 04:59:19.203563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:12.202 [2024-05-12 04:59:19.203574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:12.202 [2024-05-12 04:59:19.203596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:12.202 [2024-05-12 04:59:19.203606] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:12.202 [2024-05-12 04:59:19.203640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:12.202 [2024-05-12 04:59:19.203652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:12.202 [2024-05-12 04:59:19.203673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203694] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:12.202 [2024-05-12 04:59:19.203705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:12.202 [2024-05-12 04:59:19.203736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:12.202 [2024-05-12 04:59:19.203767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:12.202 [2024-05-12 04:59:19.203788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:12.202 [2024-05-12 04:59:19.203798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:12.202 [2024-05-12 04:59:19.203809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.202 [2024-05-12 04:59:19.203819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:12.202 [2024-05-12 04:59:19.203830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:12.202 [2024-05-12 04:59:19.203840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.202 [2024-05-12 04:59:19.203850] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:12.203 [2024-05-12 04:59:19.203862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:12.203 [2024-05-12 04:59:19.203874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.203 [2024-05-12 04:59:19.203886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.203 [2024-05-12 04:59:19.203898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:12.203 [2024-05-12 04:59:19.203928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:12.203 [2024-05-12 04:59:19.203940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:12.203 [2024-05-12 04:59:19.203951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:12.203 [2024-05-12 04:59:19.203962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:12.203 [2024-05-12 04:59:19.203973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:12.203 [2024-05-12 04:59:19.203984] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:12.203 [2024-05-12 04:59:19.204005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.203 [2024-05-12 04:59:19.204018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:12.203 [2024-05-12 04:59:19.204030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:12.203 [2024-05-12 04:59:19.204041] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:12.203 [2024-05-12 04:59:19.204053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:12.203 [2024-05-12 04:59:19.204064] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:12.203 [2024-05-12 04:59:19.204076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:12.203 [2024-05-12 04:59:19.204087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:12.203 [2024-05-12 04:59:19.204099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:12.203 [2024-05-12 04:59:19.204111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:12.203 [2024-05-12 04:59:19.204122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:12.203 [2024-05-12 04:59:19.204133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:12.203 [2024-05-12 04:59:19.204145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:12.203 [2024-05-12 04:59:19.204157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:12.203 [2024-05-12 04:59:19.204168] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:12.203 [2024-05-12 04:59:19.204182] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.203 [2024-05-12 04:59:19.204195] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:12.203 [2024-05-12 04:59:19.204208] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:12.203 [2024-05-12 04:59:19.204237] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:12.203 [2024-05-12 04:59:19.204267] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:12.203 [2024-05-12 04:59:19.204281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.204314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:12.203 [2024-05-12 04:59:19.204327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:19:12.203 [2024-05-12 04:59:19.204337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.223148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.223397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.203 [2024-05-12 04:59:19.223524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.749 ms 00:19:12.203 [2024-05-12 04:59:19.223575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.223852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.223927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.203 [2024-05-12 04:59:19.224116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:12.203 [2024-05-12 04:59:19.224170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.275394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.275563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.203 [2024-05-12 04:59:19.275682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.144 ms 00:19:12.203 [2024-05-12 04:59:19.275734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.275953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.276086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.203 [2024-05-12 04:59:19.276209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.203 [2024-05-12 04:59:19.276332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.276723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.276852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.203 [2024-05-12 04:59:19.276959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:12.203 [2024-05-12 04:59:19.277091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.277309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.277376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.203 [2024-05-12 04:59:19.277480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:12.203 [2024-05-12 04:59:19.277612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.295168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.295346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.203 [2024-05-12 04:59:19.295463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.476 ms 00:19:12.203 
[2024-05-12 04:59:19.295514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.203 [2024-05-12 04:59:19.312293] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:12.203 [2024-05-12 04:59:19.312487] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:12.203 [2024-05-12 04:59:19.312623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.203 [2024-05-12 04:59:19.312669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:12.203 [2024-05-12 04:59:19.312793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.826 ms 00:19:12.203 [2024-05-12 04:59:19.312841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.342591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.342766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:12.462 [2024-05-12 04:59:19.342809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.632 ms 00:19:12.462 [2024-05-12 04:59:19.342830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.358806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.358846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:12.462 [2024-05-12 04:59:19.358877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.884 ms 00:19:12.462 [2024-05-12 04:59:19.358888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.374575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.374625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:12.462 [2024-05-12 04:59:19.374657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.606 ms 00:19:12.462 [2024-05-12 04:59:19.374668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.375183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.375252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.462 [2024-05-12 04:59:19.375269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:19:12.462 [2024-05-12 04:59:19.375281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.450345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.450412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:12.462 [2024-05-12 04:59:19.450432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.017 ms 00:19:12.462 [2024-05-12 04:59:19.450444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.462833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:12.462 [2024-05-12 04:59:19.476319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.476377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.462 [2024-05-12 04:59:19.476397] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.751 ms 00:19:12.462 [2024-05-12 04:59:19.476408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.476529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.476548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:12.462 [2024-05-12 04:59:19.476561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:12.462 [2024-05-12 04:59:19.476571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.476635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.476655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.462 [2024-05-12 04:59:19.476667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:12.462 [2024-05-12 04:59:19.476677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.478730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.478782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:12.462 [2024-05-12 04:59:19.478797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 00:19:12.462 [2024-05-12 04:59:19.478807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.478846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.478861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.462 [2024-05-12 04:59:19.478873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:12.462 [2024-05-12 04:59:19.478889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.478929] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:12.462 [2024-05-12 04:59:19.478945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.478971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:12.462 [2024-05-12 04:59:19.478982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:12.462 [2024-05-12 04:59:19.478993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.510288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.510344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.462 [2024-05-12 04:59:19.510371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.265 ms 00:19:12.462 [2024-05-12 04:59:19.510418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.510555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.462 [2024-05-12 04:59:19.510575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.462 [2024-05-12 04:59:19.510589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:12.462 [2024-05-12 04:59:19.510601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.462 [2024-05-12 04:59:19.511560] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.462 [2024-05-12 04:59:19.515675] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.355 ms, result 0 00:19:12.462 [2024-05-12 04:59:19.516547] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.462 [2024-05-12 04:59:19.533601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:23.972  Copying: 27/256 [MB] (27 MBps) Copying: 49/256 [MB] (22 MBps) Copying: 71/256 [MB] (21 MBps) Copying: 93/256 [MB] (21 MBps) Copying: 114/256 [MB] (21 MBps) Copying: 136/256 [MB] (22 MBps) Copying: 159/256 [MB] (22 MBps) Copying: 181/256 [MB] (22 MBps) Copying: 203/256 [MB] (22 MBps) Copying: 225/256 [MB] (22 MBps) Copying: 249/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-05-12 04:59:30.963461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:23.972 [2024-05-12 04:59:30.980395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:30.980569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:23.972 [2024-05-12 04:59:30.980758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:23.972 [2024-05-12 04:59:30.980960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:30.981045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:23.972 [2024-05-12 04:59:30.984899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:30.985046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:23.972 [2024-05-12 04:59:30.985209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.822 ms 00:19:23.972 [2024-05-12 04:59:30.985274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:30.985765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:30.985920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:23.972 [2024-05-12 04:59:30.986085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:23.972 [2024-05-12 04:59:30.986133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:30.990454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:30.990651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:23.972 [2024-05-12 04:59:30.990805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.136 ms 00:19:23.972 [2024-05-12 04:59:30.990922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:30.999513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:30.999625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:23.972 [2024-05-12 04:59:30.999657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.492 ms 00:19:23.972 [2024-05-12 04:59:30.999684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:31.033069] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:31.033115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:23.972 [2024-05-12 04:59:31.033133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.292 ms 00:19:23.972 [2024-05-12 04:59:31.033145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:31.053147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:31.053245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:23.972 [2024-05-12 04:59:31.053283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.897 ms 00:19:23.972 [2024-05-12 04:59:31.053295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:31.053516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:31.053545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:23.972 [2024-05-12 04:59:31.053560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:23.972 [2024-05-12 04:59:31.053572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.972 [2024-05-12 04:59:31.088747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.972 [2024-05-12 04:59:31.088793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:23.972 [2024-05-12 04:59:31.088826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.150 ms 00:19:23.972 [2024-05-12 04:59:31.088838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.233 [2024-05-12 04:59:31.124163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.233 [2024-05-12 04:59:31.124207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:24.233 [2024-05-12 04:59:31.124239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.240 ms 00:19:24.233 [2024-05-12 04:59:31.124253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.233 [2024-05-12 04:59:31.157784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.233 [2024-05-12 04:59:31.157874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:24.233 [2024-05-12 04:59:31.157892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.448 ms 00:19:24.233 [2024-05-12 04:59:31.157903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.233 [2024-05-12 04:59:31.191451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.233 [2024-05-12 04:59:31.191494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:24.233 [2024-05-12 04:59:31.191512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.431 ms 00:19:24.233 [2024-05-12 04:59:31.191525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.233 [2024-05-12 04:59:31.191608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:24.233 [2024-05-12 04:59:31.191678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 
04:59:31.191703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.191993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:19:24.233 [2024-05-12 04:59:31.192030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:24.233 [2024-05-12 04:59:31.192604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.192982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:24.234 [2024-05-12 04:59:31.193000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:24.234 [2024-05-12 04:59:31.193046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
c7a363c9-0ae1-4712-9c4d-0d43e1b35de1 00:19:24.234 [2024-05-12 04:59:31.193075] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:24.234 [2024-05-12 04:59:31.193103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:24.234 [2024-05-12 04:59:31.193114] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:24.234 [2024-05-12 04:59:31.193126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:24.234 [2024-05-12 04:59:31.193137] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:24.234 [2024-05-12 04:59:31.193149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:24.234 [2024-05-12 04:59:31.193160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:24.234 [2024-05-12 04:59:31.193170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:24.234 [2024-05-12 04:59:31.193181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:24.234 [2024-05-12 04:59:31.193193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.234 [2024-05-12 04:59:31.193212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:24.234 [2024-05-12 04:59:31.193225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:19:24.234 [2024-05-12 04:59:31.193237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.234 [2024-05-12 04:59:31.211336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.234 [2024-05-12 04:59:31.211373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:24.234 [2024-05-12 04:59:31.211405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.057 ms 00:19:24.234 [2024-05-12 04:59:31.211416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.234 [2024-05-12 04:59:31.211658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.234 [2024-05-12 04:59:31.211681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:24.234 [2024-05-12 04:59:31.211694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:19:24.234 [2024-05-12 04:59:31.211704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.234 [2024-05-12 04:59:31.258438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.234 [2024-05-12 04:59:31.258485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.234 [2024-05-12 04:59:31.258517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.234 [2024-05-12 04:59:31.258528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.234 [2024-05-12 04:59:31.258656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.234 [2024-05-12 04:59:31.258673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.234 [2024-05-12 04:59:31.258685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.234 [2024-05-12 04:59:31.258695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.234 [2024-05-12 04:59:31.258752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.234 [2024-05-12 04:59:31.258768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.234 
[2024-05-12 04:59:31.258779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.234 [2024-05-12 04:59:31.258789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.234 [2024-05-12 04:59:31.258811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.234 [2024-05-12 04:59:31.258830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.234 [2024-05-12 04:59:31.258841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.234 [2024-05-12 04:59:31.258867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.234 [2024-05-12 04:59:31.356091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.234 [2024-05-12 04:59:31.356162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.234 [2024-05-12 04:59:31.356182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.234 [2024-05-12 04:59:31.356194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.395220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.494 [2024-05-12 04:59:31.395322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.494 [2024-05-12 04:59:31.395357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.494 [2024-05-12 04:59:31.395368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.395467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.494 [2024-05-12 04:59:31.395485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.494 [2024-05-12 04:59:31.395498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.494 [2024-05-12 04:59:31.395509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.395544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.494 [2024-05-12 04:59:31.395558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.494 [2024-05-12 04:59:31.395577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.494 [2024-05-12 04:59:31.395592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.395736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.494 [2024-05-12 04:59:31.395755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.494 [2024-05-12 04:59:31.395767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.494 [2024-05-12 04:59:31.395778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.395833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.494 [2024-05-12 04:59:31.395889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:24.494 [2024-05-12 04:59:31.395903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.494 [2024-05-12 04:59:31.395937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.395986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.494 [2024-05-12 04:59:31.396001] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.494 [2024-05-12 04:59:31.396014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.494 [2024-05-12 04:59:31.396026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.396079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.494 [2024-05-12 04:59:31.396096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.494 [2024-05-12 04:59:31.396114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.494 [2024-05-12 04:59:31.396143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.494 [2024-05-12 04:59:31.396381] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 415.976 ms, result 0 00:19:25.430 00:19:25.430 00:19:25.430 04:59:32 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:25.996 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:25.996 04:59:32 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:25.996 04:59:32 -- ftl/trim.sh@109 -- # fio_kill 00:19:25.996 04:59:32 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:25.996 04:59:32 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.996 04:59:33 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:25.996 04:59:33 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:25.996 04:59:33 -- ftl/trim.sh@20 -- # killprocess 73815 00:19:25.996 Process with pid 73815 is not found 00:19:25.996 04:59:33 -- common/autotest_common.sh@926 -- # '[' -z 73815 ']' 00:19:25.996 04:59:33 -- common/autotest_common.sh@930 -- # kill -0 73815 00:19:25.996 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (73815) - No such process 00:19:25.996 04:59:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 73815 is not found' 00:19:25.996 ************************************ 00:19:25.996 END TEST ftl_trim 00:19:25.996 ************************************ 00:19:25.996 00:19:25.996 real 1m10.885s 00:19:25.996 user 1m37.561s 00:19:25.996 sys 0m6.533s 00:19:25.996 04:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.996 04:59:33 -- common/autotest_common.sh@10 -- # set +x 00:19:25.996 04:59:33 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:19:25.996 04:59:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:25.996 04:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.996 04:59:33 -- common/autotest_common.sh@10 -- # set +x 00:19:26.254 ************************************ 00:19:26.254 START TEST ftl_restore 00:19:26.254 ************************************ 00:19:26.254 04:59:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:19:26.254 * Looking for test storage... 
00:19:26.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:26.254 04:59:33 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:26.254 04:59:33 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:26.254 04:59:33 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:26.254 04:59:33 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:26.254 04:59:33 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:26.254 04:59:33 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:26.254 04:59:33 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.254 04:59:33 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:26.254 04:59:33 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:26.254 04:59:33 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.254 04:59:33 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.254 04:59:33 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:26.254 04:59:33 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:26.254 04:59:33 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:26.254 04:59:33 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:26.254 04:59:33 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:26.254 04:59:33 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:26.254 04:59:33 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.254 04:59:33 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.254 04:59:33 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:26.254 04:59:33 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:26.254 04:59:33 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:26.254 04:59:33 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:26.254 04:59:33 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:26.254 04:59:33 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:26.254 04:59:33 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:26.254 04:59:33 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:26.255 04:59:33 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:26.255 04:59:33 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:26.255 04:59:33 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.255 04:59:33 -- ftl/restore.sh@13 -- # mktemp -d 00:19:26.255 04:59:33 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Jw2eFIkXe8 00:19:26.255 04:59:33 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:26.255 04:59:33 -- ftl/restore.sh@16 -- # case $opt in 00:19:26.255 04:59:33 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:19:26.255 04:59:33 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:26.255 04:59:33 -- ftl/restore.sh@23 -- # shift 2 00:19:26.255 04:59:33 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:19:26.255 04:59:33 -- ftl/restore.sh@25 -- # timeout=240 00:19:26.255 04:59:33 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 
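The two xtrace runs above and below this point are easier to follow with the shell logic spelled out: restore.sh first consumes its arguments (getopts ':u:c:f', then 'shift 2') and installs the restore_kill trap, while the common.sh helper get_bdev_size, exercised repeatedly in the records that follow, derives a size in MiB from bdev_get_bdevs JSON. Below is a minimal bash sketch of both patterns, reconstructed only from the traced lines; variable names are taken from the trace, while the '-u'/'-f' branches and the exact jq plumbing are assumptions.

    # Argument handling as traced at ftl/restore.sh@15-25: '-c' supplies the
    # NV-cache PCIe address, the remaining positional argument is the base device.
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c 0000:00:06.0 in this run
            u) uuid=$OPTARG ;;       # assumed handling; '-u' is not exercised here
            f) fast=1 ;;             # assumed handling; '-f' is not exercised here
        esac
    done
    shift 2                          # as traced; likely computed from OPTIND
    device=$1                        # 0000:00:07.0 in this run
    timeout=240

    # get_bdev_size as traced at autotest_common.sh@1357-1367: size in MiB is
    # block_size * num_blocks / 1024 / 1024, e.g. 4096 * 1310720 / 1048576 = 5120
    # for nvme0n1 below (and 4096 * 26476544 / 1048576 = 103424 for the lvol).
    # rpc_py is the exported path to scripts/rpc.py seen in the records above.
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<<"$bdev_info")
        nb=$(jq '.[] .num_blocks' <<<"$bdev_info")
        echo $((bs * nb / 1024 / 1024))
    }

The svcpid recorded just below feeds that restore_kill trap, so a failed run can still tear down the spdk_tgt it started.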
00:19:26.255 04:59:33 -- ftl/restore.sh@39 -- # svcpid=74094 00:19:26.255 04:59:33 -- ftl/restore.sh@41 -- # waitforlisten 74094 00:19:26.255 04:59:33 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:26.255 04:59:33 -- common/autotest_common.sh@819 -- # '[' -z 74094 ']' 00:19:26.255 04:59:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.255 04:59:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:26.255 04:59:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.255 04:59:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:26.255 04:59:33 -- common/autotest_common.sh@10 -- # set +x 00:19:26.255 [2024-05-12 04:59:33.344440] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:26.255 [2024-05-12 04:59:33.344588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74094 ] 00:19:26.513 [2024-05-12 04:59:33.505906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.771 [2024-05-12 04:59:33.690444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:26.771 [2024-05-12 04:59:33.690666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.146 04:59:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:28.146 04:59:34 -- common/autotest_common.sh@852 -- # return 0 00:19:28.146 04:59:34 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:19:28.146 04:59:34 -- ftl/common.sh@54 -- # local name=nvme0 00:19:28.146 04:59:34 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:19:28.146 04:59:34 -- ftl/common.sh@56 -- # local size=103424 00:19:28.146 04:59:34 -- ftl/common.sh@59 -- # local base_bdev 00:19:28.146 04:59:34 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:19:28.405 04:59:35 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:28.405 04:59:35 -- ftl/common.sh@62 -- # local base_size 00:19:28.405 04:59:35 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:28.405 04:59:35 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:19:28.405 04:59:35 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:28.405 04:59:35 -- common/autotest_common.sh@1359 -- # local bs 00:19:28.405 04:59:35 -- common/autotest_common.sh@1360 -- # local nb 00:19:28.405 04:59:35 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:28.405 04:59:35 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:28.405 { 00:19:28.405 "name": "nvme0n1", 00:19:28.405 "aliases": [ 00:19:28.405 "fab92459-4b8f-4c7e-9456-88fb1e73824b" 00:19:28.405 ], 00:19:28.405 "product_name": "NVMe disk", 00:19:28.405 "block_size": 4096, 00:19:28.405 "num_blocks": 1310720, 00:19:28.405 "uuid": "fab92459-4b8f-4c7e-9456-88fb1e73824b", 00:19:28.405 "assigned_rate_limits": { 00:19:28.405 "rw_ios_per_sec": 0, 00:19:28.406 "rw_mbytes_per_sec": 0, 00:19:28.406 "r_mbytes_per_sec": 0, 00:19:28.406 "w_mbytes_per_sec": 0 00:19:28.406 }, 00:19:28.406 "claimed": true, 00:19:28.406 
"claim_type": "read_many_write_one", 00:19:28.406 "zoned": false, 00:19:28.406 "supported_io_types": { 00:19:28.406 "read": true, 00:19:28.406 "write": true, 00:19:28.406 "unmap": true, 00:19:28.406 "write_zeroes": true, 00:19:28.406 "flush": true, 00:19:28.406 "reset": true, 00:19:28.406 "compare": true, 00:19:28.406 "compare_and_write": false, 00:19:28.406 "abort": true, 00:19:28.406 "nvme_admin": true, 00:19:28.406 "nvme_io": true 00:19:28.406 }, 00:19:28.406 "driver_specific": { 00:19:28.406 "nvme": [ 00:19:28.406 { 00:19:28.406 "pci_address": "0000:00:07.0", 00:19:28.406 "trid": { 00:19:28.406 "trtype": "PCIe", 00:19:28.406 "traddr": "0000:00:07.0" 00:19:28.406 }, 00:19:28.406 "ctrlr_data": { 00:19:28.406 "cntlid": 0, 00:19:28.406 "vendor_id": "0x1b36", 00:19:28.406 "model_number": "QEMU NVMe Ctrl", 00:19:28.406 "serial_number": "12341", 00:19:28.406 "firmware_revision": "8.0.0", 00:19:28.406 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:28.406 "oacs": { 00:19:28.406 "security": 0, 00:19:28.406 "format": 1, 00:19:28.406 "firmware": 0, 00:19:28.406 "ns_manage": 1 00:19:28.406 }, 00:19:28.406 "multi_ctrlr": false, 00:19:28.406 "ana_reporting": false 00:19:28.406 }, 00:19:28.406 "vs": { 00:19:28.406 "nvme_version": "1.4" 00:19:28.406 }, 00:19:28.406 "ns_data": { 00:19:28.406 "id": 1, 00:19:28.406 "can_share": false 00:19:28.406 } 00:19:28.406 } 00:19:28.406 ], 00:19:28.406 "mp_policy": "active_passive" 00:19:28.406 } 00:19:28.406 } 00:19:28.406 ]' 00:19:28.406 04:59:35 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:28.664 04:59:35 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:28.664 04:59:35 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:28.664 04:59:35 -- common/autotest_common.sh@1363 -- # nb=1310720 00:19:28.664 04:59:35 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:19:28.664 04:59:35 -- common/autotest_common.sh@1367 -- # echo 5120 00:19:28.664 04:59:35 -- ftl/common.sh@63 -- # base_size=5120 00:19:28.664 04:59:35 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:28.664 04:59:35 -- ftl/common.sh@67 -- # clear_lvols 00:19:28.664 04:59:35 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:28.664 04:59:35 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:28.923 04:59:35 -- ftl/common.sh@28 -- # stores=aee12a09-dab9-45ac-a5c8-31795c6f7c43 00:19:28.923 04:59:35 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:28.923 04:59:35 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aee12a09-dab9-45ac-a5c8-31795c6f7c43 00:19:29.182 04:59:36 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:29.441 04:59:36 -- ftl/common.sh@68 -- # lvs=78a4fee3-1fa1-4264-bfdd-452bff401b0f 00:19:29.441 04:59:36 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 78a4fee3-1fa1-4264-bfdd-452bff401b0f 00:19:29.700 04:59:36 -- ftl/restore.sh@43 -- # split_bdev=bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:29.700 04:59:36 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:19:29.700 04:59:36 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:29.700 04:59:36 -- ftl/common.sh@35 -- # local name=nvc0 00:19:29.700 04:59:36 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:29.700 04:59:36 -- ftl/common.sh@37 -- # local base_bdev=bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:29.700 04:59:36 -- 
ftl/common.sh@38 -- # local cache_size= 00:19:29.700 04:59:36 -- ftl/common.sh@41 -- # get_bdev_size bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:29.700 04:59:36 -- common/autotest_common.sh@1357 -- # local bdev_name=bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:29.700 04:59:36 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:29.700 04:59:36 -- common/autotest_common.sh@1359 -- # local bs 00:19:29.700 04:59:36 -- common/autotest_common.sh@1360 -- # local nb 00:19:29.700 04:59:36 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:29.958 04:59:36 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:29.958 { 00:19:29.958 "name": "bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea", 00:19:29.958 "aliases": [ 00:19:29.958 "lvs/nvme0n1p0" 00:19:29.958 ], 00:19:29.958 "product_name": "Logical Volume", 00:19:29.958 "block_size": 4096, 00:19:29.958 "num_blocks": 26476544, 00:19:29.958 "uuid": "bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea", 00:19:29.958 "assigned_rate_limits": { 00:19:29.958 "rw_ios_per_sec": 0, 00:19:29.958 "rw_mbytes_per_sec": 0, 00:19:29.958 "r_mbytes_per_sec": 0, 00:19:29.958 "w_mbytes_per_sec": 0 00:19:29.958 }, 00:19:29.958 "claimed": false, 00:19:29.958 "zoned": false, 00:19:29.958 "supported_io_types": { 00:19:29.958 "read": true, 00:19:29.958 "write": true, 00:19:29.958 "unmap": true, 00:19:29.958 "write_zeroes": true, 00:19:29.958 "flush": false, 00:19:29.958 "reset": true, 00:19:29.958 "compare": false, 00:19:29.958 "compare_and_write": false, 00:19:29.958 "abort": false, 00:19:29.958 "nvme_admin": false, 00:19:29.958 "nvme_io": false 00:19:29.958 }, 00:19:29.958 "driver_specific": { 00:19:29.958 "lvol": { 00:19:29.958 "lvol_store_uuid": "78a4fee3-1fa1-4264-bfdd-452bff401b0f", 00:19:29.958 "base_bdev": "nvme0n1", 00:19:29.958 "thin_provision": true, 00:19:29.958 "snapshot": false, 00:19:29.958 "clone": false, 00:19:29.958 "esnap_clone": false 00:19:29.958 } 00:19:29.958 } 00:19:29.958 } 00:19:29.958 ]' 00:19:29.958 04:59:36 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:29.958 04:59:36 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:29.958 04:59:36 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:29.958 04:59:36 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:29.958 04:59:36 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:29.958 04:59:36 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:29.958 04:59:36 -- ftl/common.sh@41 -- # local base_size=5171 00:19:29.958 04:59:36 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:29.958 04:59:36 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:19:30.217 04:59:37 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:30.217 04:59:37 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:30.217 04:59:37 -- ftl/common.sh@48 -- # get_bdev_size bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:30.217 04:59:37 -- common/autotest_common.sh@1357 -- # local bdev_name=bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:30.217 04:59:37 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:30.217 04:59:37 -- common/autotest_common.sh@1359 -- # local bs 00:19:30.217 04:59:37 -- common/autotest_common.sh@1360 -- # local nb 00:19:30.217 04:59:37 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:30.476 04:59:37 -- common/autotest_common.sh@1361 -- # 
bdev_info='[ 00:19:30.476 { 00:19:30.476 "name": "bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea", 00:19:30.476 "aliases": [ 00:19:30.476 "lvs/nvme0n1p0" 00:19:30.476 ], 00:19:30.476 "product_name": "Logical Volume", 00:19:30.476 "block_size": 4096, 00:19:30.476 "num_blocks": 26476544, 00:19:30.476 "uuid": "bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea", 00:19:30.476 "assigned_rate_limits": { 00:19:30.476 "rw_ios_per_sec": 0, 00:19:30.476 "rw_mbytes_per_sec": 0, 00:19:30.476 "r_mbytes_per_sec": 0, 00:19:30.476 "w_mbytes_per_sec": 0 00:19:30.476 }, 00:19:30.476 "claimed": false, 00:19:30.476 "zoned": false, 00:19:30.476 "supported_io_types": { 00:19:30.476 "read": true, 00:19:30.476 "write": true, 00:19:30.476 "unmap": true, 00:19:30.476 "write_zeroes": true, 00:19:30.476 "flush": false, 00:19:30.476 "reset": true, 00:19:30.476 "compare": false, 00:19:30.476 "compare_and_write": false, 00:19:30.476 "abort": false, 00:19:30.476 "nvme_admin": false, 00:19:30.476 "nvme_io": false 00:19:30.476 }, 00:19:30.476 "driver_specific": { 00:19:30.476 "lvol": { 00:19:30.476 "lvol_store_uuid": "78a4fee3-1fa1-4264-bfdd-452bff401b0f", 00:19:30.476 "base_bdev": "nvme0n1", 00:19:30.476 "thin_provision": true, 00:19:30.476 "snapshot": false, 00:19:30.476 "clone": false, 00:19:30.476 "esnap_clone": false 00:19:30.476 } 00:19:30.476 } 00:19:30.476 } 00:19:30.476 ]' 00:19:30.476 04:59:37 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:30.476 04:59:37 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:30.476 04:59:37 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:30.735 04:59:37 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:30.735 04:59:37 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:30.735 04:59:37 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:30.735 04:59:37 -- ftl/common.sh@48 -- # cache_size=5171 00:19:30.735 04:59:37 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:30.735 04:59:37 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:30.735 04:59:37 -- ftl/restore.sh@48 -- # get_bdev_size bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:30.735 04:59:37 -- common/autotest_common.sh@1357 -- # local bdev_name=bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:30.735 04:59:37 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:30.735 04:59:37 -- common/autotest_common.sh@1359 -- # local bs 00:19:30.735 04:59:37 -- common/autotest_common.sh@1360 -- # local nb 00:19:30.735 04:59:37 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea 00:19:30.994 04:59:38 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:30.994 { 00:19:30.994 "name": "bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea", 00:19:30.994 "aliases": [ 00:19:30.994 "lvs/nvme0n1p0" 00:19:30.994 ], 00:19:30.994 "product_name": "Logical Volume", 00:19:30.994 "block_size": 4096, 00:19:30.994 "num_blocks": 26476544, 00:19:30.994 "uuid": "bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea", 00:19:30.994 "assigned_rate_limits": { 00:19:30.994 "rw_ios_per_sec": 0, 00:19:30.994 "rw_mbytes_per_sec": 0, 00:19:30.994 "r_mbytes_per_sec": 0, 00:19:30.994 "w_mbytes_per_sec": 0 00:19:30.994 }, 00:19:30.994 "claimed": false, 00:19:30.994 "zoned": false, 00:19:30.994 "supported_io_types": { 00:19:30.994 "read": true, 00:19:30.994 "write": true, 00:19:30.994 "unmap": true, 00:19:30.994 "write_zeroes": true, 00:19:30.994 "flush": false, 00:19:30.994 "reset": true, 00:19:30.994 "compare": false, 
00:19:30.994 "compare_and_write": false, 00:19:30.994 "abort": false, 00:19:30.994 "nvme_admin": false, 00:19:30.994 "nvme_io": false 00:19:30.994 }, 00:19:30.994 "driver_specific": { 00:19:30.994 "lvol": { 00:19:30.994 "lvol_store_uuid": "78a4fee3-1fa1-4264-bfdd-452bff401b0f", 00:19:30.994 "base_bdev": "nvme0n1", 00:19:30.994 "thin_provision": true, 00:19:30.994 "snapshot": false, 00:19:30.994 "clone": false, 00:19:30.994 "esnap_clone": false 00:19:30.994 } 00:19:30.994 } 00:19:30.994 } 00:19:30.994 ]' 00:19:30.994 04:59:38 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:30.994 04:59:38 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:30.994 04:59:38 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:31.254 04:59:38 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:31.254 04:59:38 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:31.254 04:59:38 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:31.254 04:59:38 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:31.254 04:59:38 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea --l2p_dram_limit 10' 00:19:31.254 04:59:38 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:31.254 04:59:38 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:31.254 04:59:38 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:31.254 04:59:38 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:31.254 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:31.254 04:59:38 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bc4f335f-23e7-4c14-8ab5-99bf6e0c46ea --l2p_dram_limit 10 -c nvc0n1p0 00:19:31.254 [2024-05-12 04:59:38.355225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.355322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.254 [2024-05-12 04:59:38.355376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.254 [2024-05-12 04:59:38.355388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.355479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.355505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.254 [2024-05-12 04:59:38.355533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:31.254 [2024-05-12 04:59:38.355544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.355574] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.254 [2024-05-12 04:59:38.356713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.254 [2024-05-12 04:59:38.356755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.356767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.254 [2024-05-12 04:59:38.356781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:19:31.254 [2024-05-12 04:59:38.356790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.356950] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 
76b9a96f-c2ae-494a-8e67-433e8d3249a5 00:19:31.254 [2024-05-12 04:59:38.358039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.358094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:31.254 [2024-05-12 04:59:38.358109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:31.254 [2024-05-12 04:59:38.358121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.362691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.362749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.254 [2024-05-12 04:59:38.362766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.507 ms 00:19:31.254 [2024-05-12 04:59:38.362778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.362914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.362935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.254 [2024-05-12 04:59:38.362947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:31.254 [2024-05-12 04:59:38.362962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.363021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.363041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.254 [2024-05-12 04:59:38.363053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:31.254 [2024-05-12 04:59:38.363065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.363100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.254 [2024-05-12 04:59:38.367094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.367131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.254 [2024-05-12 04:59:38.367163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.004 ms 00:19:31.254 [2024-05-12 04:59:38.367174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.254 [2024-05-12 04:59:38.367217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.254 [2024-05-12 04:59:38.367260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.254 [2024-05-12 04:59:38.367293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:31.254 [2024-05-12 04:59:38.367303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.255 [2024-05-12 04:59:38.367378] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:31.255 [2024-05-12 04:59:38.367499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:31.255 [2024-05-12 04:59:38.367519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.255 [2024-05-12 04:59:38.367533] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:31.255 [2024-05-12 04:59:38.367548] ftl_layout.c: 
676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.255 [2024-05-12 04:59:38.367560] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.255 [2024-05-12 04:59:38.367588] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:31.255 [2024-05-12 04:59:38.367617] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.255 [2024-05-12 04:59:38.367659] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:31.255 [2024-05-12 04:59:38.367813] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:31.255 [2024-05-12 04:59:38.367827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.255 [2024-05-12 04:59:38.367837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.255 [2024-05-12 04:59:38.367863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:19:31.255 [2024-05-12 04:59:38.367873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.255 [2024-05-12 04:59:38.367981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.255 [2024-05-12 04:59:38.367999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.255 [2024-05-12 04:59:38.368013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:31.255 [2024-05-12 04:59:38.368024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.255 [2024-05-12 04:59:38.368114] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.255 [2024-05-12 04:59:38.368132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.255 [2024-05-12 04:59:38.368146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.255 [2024-05-12 04:59:38.368183] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.255 [2024-05-12 04:59:38.368219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.255 [2024-05-12 04:59:38.368288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.255 [2024-05-12 04:59:38.368299] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:31.255 [2024-05-12 04:59:38.368313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.255 [2024-05-12 04:59:38.368323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.255 [2024-05-12 04:59:38.368335] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:31.255 [2024-05-12 04:59:38.368344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.255 [2024-05-12 04:59:38.368368] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:31.255 [2024-05-12 04:59:38.368379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:31.255 [2024-05-12 04:59:38.368400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:31.255 [2024-05-12 04:59:38.368410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.255 [2024-05-12 04:59:38.368432] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368455] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.255 [2024-05-12 04:59:38.368466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368488] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.255 [2024-05-12 04:59:38.368497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.255 [2024-05-12 04:59:38.368532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.255 [2024-05-12 04:59:38.368563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.255 [2024-05-12 04:59:38.368583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.255 [2024-05-12 04:59:38.368596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:31.255 [2024-05-12 04:59:38.368620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.255 [2024-05-12 04:59:38.368631] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.255 [2024-05-12 04:59:38.368642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.255 [2024-05-12 04:59:38.368654] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.255 [2024-05-12 04:59:38.368677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.255 [2024-05-12 04:59:38.368687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.255 [2024-05-12 04:59:38.368698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.255 [2024-05-12 04:59:38.368707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.255 [2024-05-12 04:59:38.368721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.255 
[2024-05-12 04:59:38.368731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.255 [2024-05-12 04:59:38.368743] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.255 [2024-05-12 04:59:38.368756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.255 [2024-05-12 04:59:38.368770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:31.255 [2024-05-12 04:59:38.368781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:31.255 [2024-05-12 04:59:38.368793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:31.255 [2024-05-12 04:59:38.368804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:31.255 [2024-05-12 04:59:38.368818] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:31.255 [2024-05-12 04:59:38.368828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:31.255 [2024-05-12 04:59:38.368841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:31.255 [2024-05-12 04:59:38.368851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:31.255 [2024-05-12 04:59:38.368864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:31.255 [2024-05-12 04:59:38.368874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:31.255 [2024-05-12 04:59:38.368886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:31.255 [2024-05-12 04:59:38.368897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:31.255 [2024-05-12 04:59:38.368914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:31.255 [2024-05-12 04:59:38.368924] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.255 [2024-05-12 04:59:38.368940] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.255 [2024-05-12 04:59:38.368952] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.255 [2024-05-12 04:59:38.368965] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.255 [2024-05-12 04:59:38.368975] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 
00:19:31.255 [2024-05-12 04:59:38.368988] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.255 [2024-05-12 04:59:38.368999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.255 [2024-05-12 04:59:38.369012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.255 [2024-05-12 04:59:38.369023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:19:31.255 [2024-05-12 04:59:38.369034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.514 [2024-05-12 04:59:38.385828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.514 [2024-05-12 04:59:38.385891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.515 [2024-05-12 04:59:38.385909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.743 ms 00:19:31.515 [2024-05-12 04:59:38.385921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.386026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.386044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:31.515 [2024-05-12 04:59:38.386054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:31.515 [2024-05-12 04:59:38.386065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.418867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.418918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.515 [2024-05-12 04:59:38.418949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.742 ms 00:19:31.515 [2024-05-12 04:59:38.418961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.419006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.419025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.515 [2024-05-12 04:59:38.419036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:31.515 [2024-05-12 04:59:38.419047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.419455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.419478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.515 [2024-05-12 04:59:38.419493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:19:31.515 [2024-05-12 04:59:38.419505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.419680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.419708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.515 [2024-05-12 04:59:38.419720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:31.515 [2024-05-12 04:59:38.419732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.435019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.435213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:19:31.515 [2024-05-12 04:59:38.435353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.264 ms 00:19:31.515 [2024-05-12 04:59:38.435405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.446830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:31.515 [2024-05-12 04:59:38.449590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.449746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:31.515 [2024-05-12 04:59:38.449796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.978 ms 00:19:31.515 [2024-05-12 04:59:38.449808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.515167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.515 [2024-05-12 04:59:38.515259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:31.515 [2024-05-12 04:59:38.515302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.320 ms 00:19:31.515 [2024-05-12 04:59:38.515312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.515 [2024-05-12 04:59:38.515386] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:31.515 [2024-05-12 04:59:38.515406] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:34.099 [2024-05-12 04:59:40.787920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.099 [2024-05-12 04:59:40.788020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:34.099 [2024-05-12 04:59:40.788061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2272.537 ms 00:19:34.099 [2024-05-12 04:59:40.788073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.099 [2024-05-12 04:59:40.788349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.099 [2024-05-12 04:59:40.788367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:34.099 [2024-05-12 04:59:40.788392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:19:34.099 [2024-05-12 04:59:40.788403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.099 [2024-05-12 04:59:40.814042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.099 [2024-05-12 04:59:40.814079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:34.099 [2024-05-12 04:59:40.814113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.578 ms 00:19:34.099 [2024-05-12 04:59:40.814123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.099 [2024-05-12 04:59:40.839124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.099 [2024-05-12 04:59:40.839159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:34.099 [2024-05-12 04:59:40.839195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.954 ms 00:19:34.099 [2024-05-12 04:59:40.839205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.099 [2024-05-12 04:59:40.839662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.099 [2024-05-12 
04:59:40.839688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:34.099 [2024-05-12 04:59:40.839704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:19:34.099 [2024-05-12 04:59:40.839715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.099 [2024-05-12 04:59:40.909365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.100 [2024-05-12 04:59:40.909405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:34.100 [2024-05-12 04:59:40.909438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.603 ms 00:19:34.100 [2024-05-12 04:59:40.909449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.100 [2024-05-12 04:59:40.938436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.100 [2024-05-12 04:59:40.938473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:34.100 [2024-05-12 04:59:40.938507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.940 ms 00:19:34.100 [2024-05-12 04:59:40.938520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.100 [2024-05-12 04:59:40.940610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.100 [2024-05-12 04:59:40.940645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:34.100 [2024-05-12 04:59:40.940679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:19:34.100 [2024-05-12 04:59:40.940689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.100 [2024-05-12 04:59:40.966898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.100 [2024-05-12 04:59:40.966939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:34.100 [2024-05-12 04:59:40.966974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.148 ms 00:19:34.100 [2024-05-12 04:59:40.966986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.100 [2024-05-12 04:59:40.967045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.100 [2024-05-12 04:59:40.967064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:34.100 [2024-05-12 04:59:40.967078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:34.100 [2024-05-12 04:59:40.967088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.100 [2024-05-12 04:59:40.967213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.100 [2024-05-12 04:59:40.967279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:34.100 [2024-05-12 04:59:40.967295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:34.100 [2024-05-12 04:59:40.967307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.100 [2024-05-12 04:59:40.968439] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2612.623 ms, result 0 00:19:34.100 { 00:19:34.100 "name": "ftl0", 00:19:34.100 "uuid": "76b9a96f-c2ae-494a-8e67-433e8d3249a5" 00:19:34.100 } 00:19:34.100 04:59:40 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:34.100 04:59:40 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:34.359 04:59:41 -- 
ftl/restore.sh@63 -- # echo ']}' 00:19:34.359 04:59:41 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:34.619 [2024-05-12 04:59:41.499737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.499811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.619 [2024-05-12 04:59:41.499835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.619 [2024-05-12 04:59:41.499847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.499896] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.619 [2024-05-12 04:59:41.503680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.503724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.619 [2024-05-12 04:59:41.503761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.758 ms 00:19:34.619 [2024-05-12 04:59:41.503772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.504136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.504157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.619 [2024-05-12 04:59:41.504177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:34.619 [2024-05-12 04:59:41.504188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.507386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.507414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.619 [2024-05-12 04:59:41.507445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:19:34.619 [2024-05-12 04:59:41.507455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.514249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.514304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:34.619 [2024-05-12 04:59:41.514353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.765 ms 00:19:34.619 [2024-05-12 04:59:41.514365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.542664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.542699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.619 [2024-05-12 04:59:41.542733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.194 ms 00:19:34.619 [2024-05-12 04:59:41.542743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.559565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.559603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.619 [2024-05-12 04:59:41.559636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.775 ms 00:19:34.619 [2024-05-12 04:59:41.559646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.559801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:34.619 [2024-05-12 04:59:41.559820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.619 [2024-05-12 04:59:41.559834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:34.619 [2024-05-12 04:59:41.559844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.585798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.585833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:34.619 [2024-05-12 04:59:41.585867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.927 ms 00:19:34.619 [2024-05-12 04:59:41.585876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.613245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.613302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:34.619 [2024-05-12 04:59:41.613336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.322 ms 00:19:34.619 [2024-05-12 04:59:41.613345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.639126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.639160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.619 [2024-05-12 04:59:41.639193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.734 ms 00:19:34.619 [2024-05-12 04:59:41.639203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.664239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.619 [2024-05-12 04:59:41.664276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:34.619 [2024-05-12 04:59:41.664310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.892 ms 00:19:34.619 [2024-05-12 04:59:41.664320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.619 [2024-05-12 04:59:41.664382] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.619 [2024-05-12 04:59:41.664402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:19:34.619 [2024-05-12 04:59:41.664548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:34.619 [2024-05-12 04:59:41.664699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.664998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665449] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.620 [2024-05-12 04:59:41.665652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.620 [2024-05-12 04:59:41.665665] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76b9a96f-c2ae-494a-8e67-433e8d3249a5 00:19:34.620 [2024-05-12 04:59:41.665678] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.620 [2024-05-12 04:59:41.665690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.620 [2024-05-12 04:59:41.665699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.620 [2024-05-12 04:59:41.665711] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.620 [2024-05-12 04:59:41.665721] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.620 [2024-05-12 04:59:41.665733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.620 [2024-05-12 04:59:41.665743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.620 [2024-05-12 04:59:41.665754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.620 [2024-05-12 
04:59:41.665763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.620 [2024-05-12 04:59:41.665777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.620 [2024-05-12 04:59:41.665787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.620 [2024-05-12 04:59:41.665800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:19:34.620 [2024-05-12 04:59:41.665810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.620 [2024-05-12 04:59:41.679960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.621 [2024-05-12 04:59:41.679997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:34.621 [2024-05-12 04:59:41.680031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.104 ms 00:19:34.621 [2024-05-12 04:59:41.680041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.621 [2024-05-12 04:59:41.680312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.621 [2024-05-12 04:59:41.680345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:34.621 [2024-05-12 04:59:41.680359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:19:34.621 [2024-05-12 04:59:41.680384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.621 [2024-05-12 04:59:41.733745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.621 [2024-05-12 04:59:41.733804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.621 [2024-05-12 04:59:41.733854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.621 [2024-05-12 04:59:41.733882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.621 [2024-05-12 04:59:41.733954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.621 [2024-05-12 04:59:41.733969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.621 [2024-05-12 04:59:41.733981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.621 [2024-05-12 04:59:41.733991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.621 [2024-05-12 04:59:41.734110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.621 [2024-05-12 04:59:41.734129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.621 [2024-05-12 04:59:41.734142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.621 [2024-05-12 04:59:41.734153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.621 [2024-05-12 04:59:41.734179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.621 [2024-05-12 04:59:41.734192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.621 [2024-05-12 04:59:41.734219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.621 [2024-05-12 04:59:41.734244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.824542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.824599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.881 [2024-05-12 04:59:41.824634] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.824644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.857876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.857912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.881 [2024-05-12 04:59:41.857946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.857957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.858046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.858064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.881 [2024-05-12 04:59:41.858076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.858085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.858144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.858161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.881 [2024-05-12 04:59:41.858173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.858183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.858331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.858353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.881 [2024-05-12 04:59:41.858367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.858377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.858432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.858448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:34.881 [2024-05-12 04:59:41.858461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.858470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.858516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.858547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.881 [2024-05-12 04:59:41.858574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.858601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.858659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.881 [2024-05-12 04:59:41.858682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.881 [2024-05-12 04:59:41.858697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.881 [2024-05-12 04:59:41.858707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.881 [2024-05-12 04:59:41.858859] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 359.076 ms, result 0 00:19:34.881 true 00:19:34.881 04:59:41 -- 
ftl/restore.sh@66 -- # killprocess 74094 00:19:34.881 04:59:41 -- common/autotest_common.sh@926 -- # '[' -z 74094 ']' 00:19:34.881 04:59:41 -- common/autotest_common.sh@930 -- # kill -0 74094 00:19:34.881 04:59:41 -- common/autotest_common.sh@931 -- # uname 00:19:34.881 04:59:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.881 04:59:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74094 00:19:34.881 killing process with pid 74094 00:19:34.881 04:59:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:34.881 04:59:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:34.881 04:59:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74094' 00:19:34.881 04:59:41 -- common/autotest_common.sh@945 -- # kill 74094 00:19:34.881 04:59:41 -- common/autotest_common.sh@950 -- # wait 74094 00:19:39.068 04:59:45 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:43.257 262144+0 records in 00:19:43.257 262144+0 records out 00:19:43.257 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.15334 s, 259 MB/s 00:19:43.257 04:59:50 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:45.158 04:59:51 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:45.158 [2024-05-12 04:59:52.068391] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:45.158 [2024-05-12 04:59:52.068525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74349 ] 00:19:45.158 [2024-05-12 04:59:52.242854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.416 [2024-05-12 04:59:52.444645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.674 [2024-05-12 04:59:52.719496] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:45.674 [2024-05-12 04:59:52.719586] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:45.932 [2024-05-12 04:59:52.869123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.932 [2024-05-12 04:59:52.869174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:45.932 [2024-05-12 04:59:52.869207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:45.933 [2024-05-12 04:59:52.869217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.869328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.869359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.933 [2024-05-12 04:59:52.869375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:45.933 [2024-05-12 04:59:52.869385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.869414] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:45.933 [2024-05-12 04:59:52.870283] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
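By this point the harness has staged 1 GiB of random data (262144 records of 4 KiB, copied at about 259 MB/s), fingerprinted it with md5sum, and handed it to spdk_dd, which is bringing FTL back up to replay the file onto ftl0. The shape of that step, with $testfile and $ftl_json standing in for the full /home/vagrant/spdk_repo paths shown in the log:

# Stage random data, fingerprint it, then write it through the FTL bdev;
# the md5 is compared against a later read-back, outside this excerpt.
dd if=/dev/urandom of="$testfile" bs=4K count=256K
md5sum "$testfile"
spdk_dd --if="$testfile" --ob=ftl0 --json="$ftl_json"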
00:19:45.933 [2024-05-12 04:59:52.870321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.870335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.933 [2024-05-12 04:59:52.870346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:19:45.933 [2024-05-12 04:59:52.870356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.871467] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:45.933 [2024-05-12 04:59:52.884914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.884953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:45.933 [2024-05-12 04:59:52.884989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.450 ms 00:19:45.933 [2024-05-12 04:59:52.884999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.885058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.885075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:45.933 [2024-05-12 04:59:52.885086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:45.933 [2024-05-12 04:59:52.885095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.889807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.889845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.933 [2024-05-12 04:59:52.889876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.633 ms 00:19:45.933 [2024-05-12 04:59:52.889886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.889983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.890001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.933 [2024-05-12 04:59:52.890013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:45.933 [2024-05-12 04:59:52.890022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.890092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.890112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:45.933 [2024-05-12 04:59:52.890124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:45.933 [2024-05-12 04:59:52.890133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.890169] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:45.933 [2024-05-12 04:59:52.894416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.894453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.933 [2024-05-12 04:59:52.894484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:19:45.933 [2024-05-12 04:59:52.894494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.894550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 
[2024-05-12 04:59:52.894565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:45.933 [2024-05-12 04:59:52.894577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:45.933 [2024-05-12 04:59:52.894601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.894654] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:45.933 [2024-05-12 04:59:52.894682] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:45.933 [2024-05-12 04:59:52.894719] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:45.933 [2024-05-12 04:59:52.894736] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:45.933 [2024-05-12 04:59:52.894806] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:45.933 [2024-05-12 04:59:52.894820] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:45.933 [2024-05-12 04:59:52.894833] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:45.933 [2024-05-12 04:59:52.894845] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:45.933 [2024-05-12 04:59:52.894857] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:45.933 [2024-05-12 04:59:52.894872] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:45.933 [2024-05-12 04:59:52.894881] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:45.933 [2024-05-12 04:59:52.894890] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:45.933 [2024-05-12 04:59:52.894915] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:45.933 [2024-05-12 04:59:52.894925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.894935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:45.933 [2024-05-12 04:59:52.894946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:45.933 [2024-05-12 04:59:52.894955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.895026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.933 [2024-05-12 04:59:52.895042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:45.933 [2024-05-12 04:59:52.895057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:45.933 [2024-05-12 04:59:52.895066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.933 [2024-05-12 04:59:52.895160] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:45.933 [2024-05-12 04:59:52.895177] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:45.933 [2024-05-12 04:59:52.895187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895208] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:45.933 [2024-05-12 04:59:52.895217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:45.933 [2024-05-12 04:59:52.895262] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.933 [2024-05-12 04:59:52.895281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:45.933 [2024-05-12 04:59:52.895291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:45.933 [2024-05-12 04:59:52.895320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.933 [2024-05-12 04:59:52.895350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:45.933 [2024-05-12 04:59:52.895360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:45.933 [2024-05-12 04:59:52.895370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:45.933 [2024-05-12 04:59:52.895389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:45.933 [2024-05-12 04:59:52.895398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:45.933 [2024-05-12 04:59:52.895417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:45.933 [2024-05-12 04:59:52.895448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:45.933 [2024-05-12 04:59:52.895468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:45.933 [2024-05-12 04:59:52.895496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:45.933 [2024-05-12 04:59:52.895524] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:45.933 [2024-05-12 04:59:52.895552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:45.933 [2024-05-12 04:59:52.895611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:45.933 [2024-05-12 
04:59:52.895620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.933 [2024-05-12 04:59:52.895630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:45.933 [2024-05-12 04:59:52.895639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:45.933 [2024-05-12 04:59:52.895648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.933 [2024-05-12 04:59:52.895657] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:45.933 [2024-05-12 04:59:52.895667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:45.933 [2024-05-12 04:59:52.895677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.933 [2024-05-12 04:59:52.895701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.933 [2024-05-12 04:59:52.895718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:45.933 [2024-05-12 04:59:52.895728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:45.934 [2024-05-12 04:59:52.895737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:45.934 [2024-05-12 04:59:52.895747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:45.934 [2024-05-12 04:59:52.895756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:45.934 [2024-05-12 04:59:52.895765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:45.934 [2024-05-12 04:59:52.895776] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:45.934 [2024-05-12 04:59:52.895788] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.934 [2024-05-12 04:59:52.895800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:45.934 [2024-05-12 04:59:52.895810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:45.934 [2024-05-12 04:59:52.895820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:45.934 [2024-05-12 04:59:52.895830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:45.934 [2024-05-12 04:59:52.895840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:45.934 [2024-05-12 04:59:52.895850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:45.934 [2024-05-12 04:59:52.895860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:45.934 [2024-05-12 04:59:52.895871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:45.934 [2024-05-12 04:59:52.895881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:45.934 [2024-05-12 04:59:52.895891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:45.934 [2024-05-12 04:59:52.895901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:45.934 [2024-05-12 04:59:52.895911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:45.934 [2024-05-12 04:59:52.895921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:45.934 [2024-05-12 04:59:52.895958] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:45.934 [2024-05-12 04:59:52.895971] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.934 [2024-05-12 04:59:52.895984] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:45.934 [2024-05-12 04:59:52.895995] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:45.934 [2024-05-12 04:59:52.896006] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:45.934 [2024-05-12 04:59:52.896017] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:45.934 [2024-05-12 04:59:52.896029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.896041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:45.934 [2024-05-12 04:59:52.896052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:19:45.934 [2024-05-12 04:59:52.896063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.913964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.914004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.934 [2024-05-12 04:59:52.914037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.848 ms 00:19:45.934 [2024-05-12 04:59:52.914047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.914137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.914151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:45.934 [2024-05-12 04:59:52.914167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:45.934 [2024-05-12 04:59:52.914177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.964566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.964610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.934 [2024-05-12 04:59:52.964643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.260 ms 00:19:45.934 [2024-05-12 04:59:52.964653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.964709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
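Each management step in this output is a quartet emitted from mngt/ftl_mngt.c: Action (or Rollback, as in the shutdown above) at line 406, the step name at 407, its duration at 409, and its status at 410. A rough sketch for pulling the slowest steps out of a capture of this output, assuming one *NOTICE* record per console line and a hypothetical file name build.log:

# Pair each step name with its duration and rank by time;
# e.g. '2272.537 ms - Scrub NV cache' tops the first startup above.
awk -F'name: |duration: ' '/407:trace_step/ {n=$2}
    /409:trace_step/ {print $2, "-", n}' build.log | sort -rn | head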
00:19:45.934 [2024-05-12 04:59:52.964725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.934 [2024-05-12 04:59:52.964736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:45.934 [2024-05-12 04:59:52.964745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.965110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.965153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.934 [2024-05-12 04:59:52.965166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:45.934 [2024-05-12 04:59:52.965176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.965346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.965371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.934 [2024-05-12 04:59:52.965383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:45.934 [2024-05-12 04:59:52.965393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.979407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.979442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.934 [2024-05-12 04:59:52.979472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.986 ms 00:19:45.934 [2024-05-12 04:59:52.979482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:52.993255] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:45.934 [2024-05-12 04:59:52.993292] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:45.934 [2024-05-12 04:59:52.993323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:52.993333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:45.934 [2024-05-12 04:59:52.993344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.734 ms 00:19:45.934 [2024-05-12 04:59:52.993353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:53.017399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:53.017436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:45.934 [2024-05-12 04:59:53.017466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.005 ms 00:19:45.934 [2024-05-12 04:59:53.017476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:53.030733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:53.030769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:45.934 [2024-05-12 04:59:53.030799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.216 ms 00:19:45.934 [2024-05-12 04:59:53.030808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:53.043891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:53.043947] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:45.934 [2024-05-12 04:59:53.043978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.034 ms 00:19:45.934 [2024-05-12 04:59:53.043988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.934 [2024-05-12 04:59:53.044460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.934 [2024-05-12 04:59:53.044493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:45.934 [2024-05-12 04:59:53.044506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:19:45.934 [2024-05-12 04:59:53.044516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.106797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.106864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:46.194 [2024-05-12 04:59:53.106896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.259 ms 00:19:46.194 [2024-05-12 04:59:53.106906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.117468] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:46.194 [2024-05-12 04:59:53.119480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.119507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:46.194 [2024-05-12 04:59:53.119537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.522 ms 00:19:46.194 [2024-05-12 04:59:53.119546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.119624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.119641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:46.194 [2024-05-12 04:59:53.119656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:46.194 [2024-05-12 04:59:53.119665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.119735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.119752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:46.194 [2024-05-12 04:59:53.119762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:46.194 [2024-05-12 04:59:53.119771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.121563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.121595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:46.194 [2024-05-12 04:59:53.121622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:19:46.194 [2024-05-12 04:59:53.121637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.121668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.121682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:46.194 [2024-05-12 04:59:53.121692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.194 [2024-05-12 04:59:53.121701] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.121754] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:46.194 [2024-05-12 04:59:53.121771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.121781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:46.194 [2024-05-12 04:59:53.121791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:46.194 [2024-05-12 04:59:53.121799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.147497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.147535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:46.194 [2024-05-12 04:59:53.147566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.674 ms 00:19:46.194 [2024-05-12 04:59:53.147576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.147646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.194 [2024-05-12 04:59:53.147662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:46.194 [2024-05-12 04:59:53.147673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:46.194 [2024-05-12 04:59:53.147689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.194 [2024-05-12 04:59:53.148857] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.212 ms, result 0 00:20:30.087  Copying: 1024/1024 [MB] (average 23 MBps)[2024-05-12 05:00:37.202544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.087 [2024-05-12 05:00:37.202616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:30.087 [2024-05-12 05:00:37.202668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:30.087 
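
The copy phase above moved 1024 MiB through ftl0 at a reported average of 23 MBps, which should take roughly 44-45 s; the wall-clock stamps bracketing it agree ('FTL startup' finished at 04:59:53.148857, the first 'FTL shutdown' step logged at 05:00:37.202544). A minimal cross-check of those figures, plain Python written for this note with the values copied from the records above:

    from datetime import datetime

    # Wall-clock stamps copied from the surrounding log records.
    startup_done = datetime.strptime("04:59:53.148857", "%H:%M:%S.%f")    # 'FTL startup' finish_msg
    shutdown_begins = datetime.strptime("05:00:37.202544", "%H:%M:%S.%f") # first 'FTL shutdown' step

    copied_mib = 1024   # "Copying: 1024/1024 [MB]"
    avg_mbps = 23       # "(average 23 MBps)"

    elapsed = (shutdown_begins - startup_done).total_seconds()
    print(f"copy window: {elapsed:.1f} s")                        # ~44.1 s
    print(f"implied rate: {copied_mib / elapsed:.1f} MBps")       # ~23.2 MBps, matching the meter
    print(f"expected at 23 MBps: {copied_mib / avg_mbps:.1f} s")  # ~44.5 s
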
[2024-05-12 05:00:37.202678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.087 [2024-05-12 05:00:37.202705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:30.087 [2024-05-12 05:00:37.205723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.087 [2024-05-12 05:00:37.205755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:30.087 [2024-05-12 05:00:37.205786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:20:30.087 [2024-05-12 05:00:37.205795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.087 [2024-05-12 05:00:37.208656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.087 [2024-05-12 05:00:37.208755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:30.087 [2024-05-12 05:00:37.208786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:20:30.087 [2024-05-12 05:00:37.208796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.224039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.224083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:30.346 [2024-05-12 05:00:37.224102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.223 ms 00:20:30.346 [2024-05-12 05:00:37.224114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.229973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.230009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:30.346 [2024-05-12 05:00:37.230037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.820 ms 00:20:30.346 [2024-05-12 05:00:37.230046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.258746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.258785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:30.346 [2024-05-12 05:00:37.258815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.636 ms 00:20:30.346 [2024-05-12 05:00:37.258825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.276941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.276979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:30.346 [2024-05-12 05:00:37.277009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.077 ms 00:20:30.346 [2024-05-12 05:00:37.277019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.277179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.277198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:30.346 [2024-05-12 05:00:37.277267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:30.346 [2024-05-12 05:00:37.277281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.305252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.305301] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:30.346 [2024-05-12 05:00:37.305340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.951 ms 00:20:30.346 [2024-05-12 05:00:37.305366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.332875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.332913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:30.346 [2024-05-12 05:00:37.332943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.469 ms 00:20:30.346 [2024-05-12 05:00:37.332952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.359341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.359378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:30.346 [2024-05-12 05:00:37.359407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.352 ms 00:20:30.346 [2024-05-12 05:00:37.359417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.346 [2024-05-12 05:00:37.385819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.346 [2024-05-12 05:00:37.385859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:30.347 [2024-05-12 05:00:37.385891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.312 ms 00:20:30.347 [2024-05-12 05:00:37.385916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.347 [2024-05-12 05:00:37.385970] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:30.347 [2024-05-12 05:00:37.385992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386128] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 
05:00:37.386445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:20:30.347 [2024-05-12 05:00:37.386744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.386992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.387002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.387013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:30.347 [2024-05-12 05:00:37.387023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:30.348 [2024-05-12 05:00:37.387144] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:30.348 [2024-05-12 05:00:37.387154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76b9a96f-c2ae-494a-8e67-433e8d3249a5 00:20:30.348 [2024-05-12 05:00:37.387164] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:30.348 [2024-05-12 05:00:37.387180] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:30.348 [2024-05-12 05:00:37.387190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:30.348 [2024-05-12 05:00:37.387200] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:30.348 [2024-05-12 05:00:37.387210] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:30.348 [2024-05-12 05:00:37.387219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:30.348 [2024-05-12 05:00:37.387229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:30.348 [2024-05-12 05:00:37.387237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:30.348 [2024-05-12 05:00:37.387246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:30.348 [2024-05-12 05:00:37.387256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.348 [2024-05-12 05:00:37.387266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:30.348 [2024-05-12 05:00:37.387287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:20:30.348 [2024-05-12 05:00:37.387299] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.348 [2024-05-12 05:00:37.402058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.348 [2024-05-12 05:00:37.402093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:30.348 [2024-05-12 05:00:37.402136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.697 ms 00:20:30.348 [2024-05-12 05:00:37.402146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.348 [2024-05-12 05:00:37.402384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.348 [2024-05-12 05:00:37.402405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:30.348 [2024-05-12 05:00:37.402417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:20:30.348 [2024-05-12 05:00:37.402427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.348 [2024-05-12 05:00:37.441116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.348 [2024-05-12 05:00:37.441159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.348 [2024-05-12 05:00:37.441189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.348 [2024-05-12 05:00:37.441199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.348 [2024-05-12 05:00:37.441270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.348 [2024-05-12 05:00:37.441285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.348 [2024-05-12 05:00:37.441312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.348 [2024-05-12 05:00:37.441321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.348 [2024-05-12 05:00:37.441441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.348 [2024-05-12 05:00:37.441460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.348 [2024-05-12 05:00:37.441471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.348 [2024-05-12 05:00:37.441480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.348 [2024-05-12 05:00:37.441503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.348 [2024-05-12 05:00:37.441515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.348 [2024-05-12 05:00:37.441525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.348 [2024-05-12 05:00:37.441550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.520735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.520797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.607 [2024-05-12 05:00:37.520830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.520839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.551853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.551890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.607 [2024-05-12 05:00:37.551921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.551930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.552035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.552059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.607 [2024-05-12 05:00:37.552071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.552081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.552130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.552145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.607 [2024-05-12 05:00:37.552155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.552164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.552355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.552374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.607 [2024-05-12 05:00:37.552392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.552402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.552448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.552464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:30.607 [2024-05-12 05:00:37.552475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.552485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.552524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.552537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.607 [2024-05-12 05:00:37.552553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.552562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.552639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.607 [2024-05-12 05:00:37.552671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.607 [2024-05-12 05:00:37.552697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.607 [2024-05-12 05:00:37.552706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.607 [2024-05-12 05:00:37.552835] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.259 ms, result 0 00:20:31.548 00:20:31.548 00:20:31.548 05:00:38 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:31.548 [2024-05-12 05:00:38.641726] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
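
The spdk_dd readback just launched above passes --count=262144, which at the ftl0 bdev's 4 KiB logical block size (SPDK FTL's stock FTL_BLOCK_SIZE of 4096 bytes, assumed here) works out to exactly the 1024 MB the progress meter reported; the same block size also reproduces the 102400.00 MiB data_btm region shown in the layout dumps. A quick sanity check in Python, figures taken from the log:

    FTL_BLOCK_SIZE = 4096  # bytes; stock SPDK FTL block size (assumption: default build)

    # spdk_dd --count=262144 moves 262144 FTL blocks:
    count = 262144
    print(count * FTL_BLOCK_SIZE // 2**20, "MiB")        # 1024 MiB -> "Copying: 1024/1024 [MB]"

    # Base-device data region from the superblock dump: Region type:0x9 blk_sz:0x1900000
    data_blocks = 0x1900000
    print(data_blocks * FTL_BLOCK_SIZE // 2**20, "MiB")  # 102400 MiB -> "data_btm ... blocks: 102400.00 MiB"
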
00:20:31.548 [2024-05-12 05:00:38.641895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74818 ] 00:20:31.808 [2024-05-12 05:00:38.810438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.067 [2024-05-12 05:00:38.963882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.325 [2024-05-12 05:00:39.225743] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.325 [2024-05-12 05:00:39.225819] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.325 [2024-05-12 05:00:39.375660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.375713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:32.325 [2024-05-12 05:00:39.375747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:32.325 [2024-05-12 05:00:39.375759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.375820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.375838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.325 [2024-05-12 05:00:39.375849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:32.325 [2024-05-12 05:00:39.375859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.375885] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:32.325 [2024-05-12 05:00:39.376857] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:32.325 [2024-05-12 05:00:39.376898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.376912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.325 [2024-05-12 05:00:39.376924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:20:32.325 [2024-05-12 05:00:39.376935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.378085] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:32.325 [2024-05-12 05:00:39.392571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.392611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:32.325 [2024-05-12 05:00:39.392649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.488 ms 00:20:32.325 [2024-05-12 05:00:39.392660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.392724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.392743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:32.325 [2024-05-12 05:00:39.392754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:32.325 [2024-05-12 05:00:39.392765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.397668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 
05:00:39.397704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.325 [2024-05-12 05:00:39.397734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:20:32.325 [2024-05-12 05:00:39.397745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.397868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.397887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.325 [2024-05-12 05:00:39.397915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:32.325 [2024-05-12 05:00:39.397926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.397976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.397998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:32.325 [2024-05-12 05:00:39.398011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:32.325 [2024-05-12 05:00:39.398021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.398056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.325 [2024-05-12 05:00:39.401994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.402027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.325 [2024-05-12 05:00:39.402057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.951 ms 00:20:32.325 [2024-05-12 05:00:39.402067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.402104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.402119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:32.325 [2024-05-12 05:00:39.402130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:32.325 [2024-05-12 05:00:39.402140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.402178] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:32.325 [2024-05-12 05:00:39.402207] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:32.325 [2024-05-12 05:00:39.402278] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:32.325 [2024-05-12 05:00:39.402301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:32.325 [2024-05-12 05:00:39.402386] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:32.325 [2024-05-12 05:00:39.402401] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:32.325 [2024-05-12 05:00:39.402415] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:32.325 [2024-05-12 05:00:39.402428] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:32.325 [2024-05-12 05:00:39.402441] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:32.325 [2024-05-12 05:00:39.402457] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:32.325 [2024-05-12 05:00:39.402467] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:32.325 [2024-05-12 05:00:39.402477] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:32.325 [2024-05-12 05:00:39.402487] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:32.325 [2024-05-12 05:00:39.402497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.402508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:32.325 [2024-05-12 05:00:39.402519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:32.325 [2024-05-12 05:00:39.402528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.402607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.325 [2024-05-12 05:00:39.402621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:32.325 [2024-05-12 05:00:39.402636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:32.325 [2024-05-12 05:00:39.402662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.325 [2024-05-12 05:00:39.402774] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:32.325 [2024-05-12 05:00:39.402798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:32.325 [2024-05-12 05:00:39.402810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.325 [2024-05-12 05:00:39.402822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.325 [2024-05-12 05:00:39.402833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:32.325 [2024-05-12 05:00:39.402842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:32.325 [2024-05-12 05:00:39.402853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:32.325 [2024-05-12 05:00:39.402863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:32.325 [2024-05-12 05:00:39.402873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:32.325 [2024-05-12 05:00:39.402883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.325 [2024-05-12 05:00:39.402893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:32.325 [2024-05-12 05:00:39.402903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:32.325 [2024-05-12 05:00:39.402913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.325 [2024-05-12 05:00:39.402925] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:32.325 [2024-05-12 05:00:39.402935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:32.325 [2024-05-12 05:00:39.402946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.325 [2024-05-12 05:00:39.402956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:32.325 [2024-05-12 05:00:39.402966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:32.325 [2024-05-12 05:00:39.402975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:32.325 [2024-05-12 05:00:39.402985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:32.325 [2024-05-12 05:00:39.402995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:32.325 [2024-05-12 05:00:39.403019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:32.325 [2024-05-12 05:00:39.403030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:32.325 [2024-05-12 05:00:39.403039] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:32.325 [2024-05-12 05:00:39.403049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.325 [2024-05-12 05:00:39.403073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:32.325 [2024-05-12 05:00:39.403083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:32.325 [2024-05-12 05:00:39.403093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.325 [2024-05-12 05:00:39.403103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:32.325 [2024-05-12 05:00:39.403112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:32.325 [2024-05-12 05:00:39.403121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.325 [2024-05-12 05:00:39.403131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:32.325 [2024-05-12 05:00:39.403140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:32.325 [2024-05-12 05:00:39.403149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.325 [2024-05-12 05:00:39.403159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:32.325 [2024-05-12 05:00:39.403169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:32.325 [2024-05-12 05:00:39.403178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.326 [2024-05-12 05:00:39.403189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:32.326 [2024-05-12 05:00:39.403198] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:32.326 [2024-05-12 05:00:39.403208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.326 [2024-05-12 05:00:39.403217] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:32.326 [2024-05-12 05:00:39.403228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:32.326 [2024-05-12 05:00:39.403238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.326 [2024-05-12 05:00:39.403249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.326 [2024-05-12 05:00:39.403264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:32.326 [2024-05-12 05:00:39.403275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:32.326 [2024-05-12 05:00:39.403284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:32.326 [2024-05-12 05:00:39.403294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:32.326 [2024-05-12 05:00:39.403322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:32.326 [2024-05-12 05:00:39.403353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:32.326 [2024-05-12 05:00:39.403365] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:32.326 [2024-05-12 05:00:39.403378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.326 [2024-05-12 05:00:39.403390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:32.326 [2024-05-12 05:00:39.403400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:32.326 [2024-05-12 05:00:39.403411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:32.326 [2024-05-12 05:00:39.403437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:32.326 [2024-05-12 05:00:39.403449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:32.326 [2024-05-12 05:00:39.403460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:32.326 [2024-05-12 05:00:39.403471] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:32.326 [2024-05-12 05:00:39.403482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:32.326 [2024-05-12 05:00:39.403493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:32.326 [2024-05-12 05:00:39.403505] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:32.326 [2024-05-12 05:00:39.403535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:32.326 [2024-05-12 05:00:39.403546] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:32.326 [2024-05-12 05:00:39.403571] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:32.326 [2024-05-12 05:00:39.403581] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:32.326 [2024-05-12 05:00:39.403592] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.326 [2024-05-12 05:00:39.403604] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:32.326 [2024-05-12 05:00:39.403615] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:32.326 [2024-05-12 05:00:39.403626] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:32.326 [2024-05-12 05:00:39.403637] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:32.326 [2024-05-12 05:00:39.403648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.326 [2024-05-12 05:00:39.403659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:32.326 [2024-05-12 05:00:39.403669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:20:32.326 [2024-05-12 05:00:39.403680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.326 [2024-05-12 05:00:39.419185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.326 [2024-05-12 05:00:39.419254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.326 [2024-05-12 05:00:39.419288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.425 ms 00:20:32.326 [2024-05-12 05:00:39.419299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.326 [2024-05-12 05:00:39.419400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.326 [2024-05-12 05:00:39.419416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:32.326 [2024-05-12 05:00:39.419433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:32.326 [2024-05-12 05:00:39.419443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.583 [2024-05-12 05:00:39.461934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.583 [2024-05-12 05:00:39.461996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.583 [2024-05-12 05:00:39.462029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.429 ms 00:20:32.583 [2024-05-12 05:00:39.462040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.583 [2024-05-12 05:00:39.462101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.583 [2024-05-12 05:00:39.462118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.583 [2024-05-12 05:00:39.462129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:32.583 [2024-05-12 05:00:39.462139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.583 [2024-05-12 05:00:39.462561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.583 [2024-05-12 05:00:39.462597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.583 [2024-05-12 05:00:39.462610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:20:32.583 [2024-05-12 05:00:39.462621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.583 [2024-05-12 05:00:39.462773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.583 [2024-05-12 05:00:39.462797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.583 [2024-05-12 05:00:39.462810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:20:32.583 [2024-05-12 05:00:39.462820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.583 [2024-05-12 05:00:39.477080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.583 [2024-05-12 05:00:39.477117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.583 [2024-05-12 05:00:39.477149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.234 ms 00:20:32.583 [2024-05-12 
05:00:39.477159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.583 [2024-05-12 05:00:39.490665] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:32.583 [2024-05-12 05:00:39.490702] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:32.583 [2024-05-12 05:00:39.490734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.583 [2024-05-12 05:00:39.490744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:32.584 [2024-05-12 05:00:39.490755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.421 ms 00:20:32.584 [2024-05-12 05:00:39.490765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.514595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.514633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:32.584 [2024-05-12 05:00:39.514663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.789 ms 00:20:32.584 [2024-05-12 05:00:39.514674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.527783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.527819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:32.584 [2024-05-12 05:00:39.527848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.061 ms 00:20:32.584 [2024-05-12 05:00:39.527857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.541173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.541207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:32.584 [2024-05-12 05:00:39.541280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.277 ms 00:20:32.584 [2024-05-12 05:00:39.541291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.541766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.541805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:32.584 [2024-05-12 05:00:39.541822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:20:32.584 [2024-05-12 05:00:39.541833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.614930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.614984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:32.584 [2024-05-12 05:00:39.615018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.073 ms 00:20:32.584 [2024-05-12 05:00:39.615029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.626039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:32.584 [2024-05-12 05:00:39.628508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.628539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.584 [2024-05-12 05:00:39.628569] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.421 ms 00:20:32.584 [2024-05-12 05:00:39.628580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.628673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.628694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.584 [2024-05-12 05:00:39.628707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:32.584 [2024-05-12 05:00:39.628717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.628792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.628810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.584 [2024-05-12 05:00:39.628821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:32.584 [2024-05-12 05:00:39.628831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.630524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.630559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:32.584 [2024-05-12 05:00:39.630593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:20:32.584 [2024-05-12 05:00:39.630618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.630659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.630673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.584 [2024-05-12 05:00:39.630683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:32.584 [2024-05-12 05:00:39.630693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.630738] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.584 [2024-05-12 05:00:39.630753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.630763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:32.584 [2024-05-12 05:00:39.630774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:32.584 [2024-05-12 05:00:39.630788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.658515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.658554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.584 [2024-05-12 05:00:39.658584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.706 ms 00:20:32.584 [2024-05-12 05:00:39.658595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.658667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.584 [2024-05-12 05:00:39.658691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:32.584 [2024-05-12 05:00:39.658703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:32.584 [2024-05-12 05:00:39.658713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.584 [2024-05-12 05:00:39.660016] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 283.708 ms, result 0 00:21:16.825  Copying: 1024/1024 [MB] (average 23 MBps)[2024-05-12 05:01:23.793267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.793337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:16.825 [2024-05-12 05:01:23.793363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:16.825 [2024-05-12 05:01:23.793391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.793431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:16.825 [2024-05-12 05:01:23.797647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.797694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:16.825 [2024-05-12 05:01:23.797714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.186 ms 00:21:16.825 [2024-05-12 05:01:23.797738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.798074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.798100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:16.825 [2024-05-12 05:01:23.798117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:21:16.825 [2024-05-12 05:01:23.798132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.802718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.802749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:16.825 [2024-05-12 05:01:23.802778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.561 ms 00:21:16.825 [2024-05-12 05:01:23.802788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.808741] mngt/ftl_mngt.c:
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.808769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:16.825 [2024-05-12 05:01:23.808798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.927 ms 00:21:16.825 [2024-05-12 05:01:23.808807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.833682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.833719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:16.825 [2024-05-12 05:01:23.833734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.802 ms 00:21:16.825 [2024-05-12 05:01:23.833743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.848176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.848212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:16.825 [2024-05-12 05:01:23.848255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.411 ms 00:21:16.825 [2024-05-12 05:01:23.848266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.848432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.848457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:16.825 [2024-05-12 05:01:23.848468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:16.825 [2024-05-12 05:01:23.848478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.873186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.873245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:16.825 [2024-05-12 05:01:23.873276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.679 ms 00:21:16.825 [2024-05-12 05:01:23.873286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.825 [2024-05-12 05:01:23.897912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.825 [2024-05-12 05:01:23.897947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:16.826 [2024-05-12 05:01:23.897977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.604 ms 00:21:16.826 [2024-05-12 05:01:23.897987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.826 [2024-05-12 05:01:23.922470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.826 [2024-05-12 05:01:23.922507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:16.826 [2024-05-12 05:01:23.922522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.462 ms 00:21:16.826 [2024-05-12 05:01:23.922532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.826 [2024-05-12 05:01:23.946549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.826 [2024-05-12 05:01:23.946607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:16.826 [2024-05-12 05:01:23.946644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.960 ms 00:21:16.826 [2024-05-12 05:01:23.946660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:16.826 [2024-05-12 05:01:23.946690] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:16.826 [2024-05-12 05:01:23.946709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:21:16.826 [2024-05-12 05:01:23.946981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.946991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:16.826 [2024-05-12 05:01:23.947523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.947994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.948005] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.948016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:16.827 [2024-05-12 05:01:23.948062] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:16.827 [2024-05-12 05:01:23.948076] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76b9a96f-c2ae-494a-8e67-433e8d3249a5 00:21:16.827 [2024-05-12 05:01:23.948094] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:16.827 [2024-05-12 05:01:23.948105] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:16.827 [2024-05-12 05:01:23.948116] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:16.827 [2024-05-12 05:01:23.948127] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:16.827 [2024-05-12 05:01:23.948137] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:16.827 [2024-05-12 05:01:23.948162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:16.827 [2024-05-12 05:01:23.948174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:16.827 [2024-05-12 05:01:23.948184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:16.827 [2024-05-12 05:01:23.948193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:16.827 [2024-05-12 05:01:23.948204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.827 [2024-05-12 05:01:23.948214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:16.827 [2024-05-12 05:01:23.948225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:21:16.827 [2024-05-12 05:01:23.948259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:23.962089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.086 [2024-05-12 05:01:23.962122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:17.086 [2024-05-12 05:01:23.962136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.789 ms 00:21:17.086 [2024-05-12 05:01:23.962146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:23.962397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.086 [2024-05-12 05:01:23.962415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:17.086 [2024-05-12 05:01:23.962426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:21:17.086 [2024-05-12 05:01:23.962443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:23.997620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:23.997656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:17.086 [2024-05-12 05:01:23.997670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.086 [2024-05-12 05:01:23.997680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:23.997732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:23.997745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:17.086 
[2024-05-12 05:01:23.997755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.086 [2024-05-12 05:01:23.997770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:23.997841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:23.997858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:17.086 [2024-05-12 05:01:23.997868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.086 [2024-05-12 05:01:23.997878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:23.997896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:23.997907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:17.086 [2024-05-12 05:01:23.997917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.086 [2024-05-12 05:01:23.997926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:24.072174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:24.072233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:17.086 [2024-05-12 05:01:24.072265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.086 [2024-05-12 05:01:24.072276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:24.101987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:24.102022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:17.086 [2024-05-12 05:01:24.102036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.086 [2024-05-12 05:01:24.102046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:24.102122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:24.102137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:17.086 [2024-05-12 05:01:24.102147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.086 [2024-05-12 05:01:24.102156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.086 [2024-05-12 05:01:24.102199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.086 [2024-05-12 05:01:24.102213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:17.086 [2024-05-12 05:01:24.102261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.087 [2024-05-12 05:01:24.102271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.087 [2024-05-12 05:01:24.102388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.087 [2024-05-12 05:01:24.102409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:17.087 [2024-05-12 05:01:24.102420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.087 [2024-05-12 05:01:24.102429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.087 [2024-05-12 05:01:24.102477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.087 [2024-05-12 05:01:24.102507] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:17.087 [2024-05-12 05:01:24.102534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.087 [2024-05-12 05:01:24.102560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.087 [2024-05-12 05:01:24.102600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.087 [2024-05-12 05:01:24.102620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:17.087 [2024-05-12 05:01:24.102631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.087 [2024-05-12 05:01:24.102642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.087 [2024-05-12 05:01:24.102702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.087 [2024-05-12 05:01:24.102717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:17.087 [2024-05-12 05:01:24.102728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.087 [2024-05-12 05:01:24.102738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.087 [2024-05-12 05:01:24.102860] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 309.605 ms, result 0 00:21:18.021 00:21:18.021 00:21:18.021 05:01:24 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:19.924 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:19.924 05:01:26 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:19.924 [2024-05-12 05:01:26.834625] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
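The two restore.sh steps above amount to a checksum verification followed by an spdk_dd write back into the FTL bdev. A minimal stand-alone sketch of that sequence, assuming the same paths, flags, and ftl0 bdev shown in the log (SPDK_DIR is a stand-in variable, not part of the test):

  #!/usr/bin/env bash
  set -e
  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # stand-in; path taken from the log
  TESTFILE=$SPDK_DIR/test/ftl/testfile

  # Verify the data written before the FTL shutdown survived intact.
  md5sum -c "$TESTFILE.md5"

  # Re-write the same file into the ftl0 bdev, re-attaching the FTL instance
  # from its JSON bdev config, at the --seek=131072 offset used by the test.
  "$SPDK_DIR/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json" --seek=131072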
00:21:19.924 [2024-05-12 05:01:26.835019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75301 ] 00:21:19.924 [2024-05-12 05:01:27.007374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.183 [2024-05-12 05:01:27.205706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.441 [2024-05-12 05:01:27.452598] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.441 [2024-05-12 05:01:27.452671] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.700 [2024-05-12 05:01:27.600011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.600108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:20.701 [2024-05-12 05:01:27.600143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:20.701 [2024-05-12 05:01:27.600154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.600217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.600234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.701 [2024-05-12 05:01:27.600480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:20.701 [2024-05-12 05:01:27.600536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.600593] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:20.701 [2024-05-12 05:01:27.601541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:20.701 [2024-05-12 05:01:27.601582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.601595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.701 [2024-05-12 05:01:27.601607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:21:20.701 [2024-05-12 05:01:27.601617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.602794] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:20.701 [2024-05-12 05:01:27.616158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.616199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:20.701 [2024-05-12 05:01:27.616281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.366 ms 00:21:20.701 [2024-05-12 05:01:27.616294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.616373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.616418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:20.701 [2024-05-12 05:01:27.616443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:20.701 [2024-05-12 05:01:27.616452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.620645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 
05:01:27.620679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.701 [2024-05-12 05:01:27.620707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:21:20.701 [2024-05-12 05:01:27.620717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.620806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.620823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.701 [2024-05-12 05:01:27.620833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:20.701 [2024-05-12 05:01:27.620843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.620892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.620911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:20.701 [2024-05-12 05:01:27.620921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:20.701 [2024-05-12 05:01:27.620930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.620961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:20.701 [2024-05-12 05:01:27.624619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.624649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.701 [2024-05-12 05:01:27.624662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:21:20.701 [2024-05-12 05:01:27.624670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.624704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.624717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:20.701 [2024-05-12 05:01:27.624727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:20.701 [2024-05-12 05:01:27.624735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.624758] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:20.701 [2024-05-12 05:01:27.624780] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:20.701 [2024-05-12 05:01:27.624815] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:20.701 [2024-05-12 05:01:27.624830] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:20.701 [2024-05-12 05:01:27.624892] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:20.701 [2024-05-12 05:01:27.624905] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:20.701 [2024-05-12 05:01:27.624916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:20.701 [2024-05-12 05:01:27.624928] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:20.701 [2024-05-12 05:01:27.624938] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:20.701 [2024-05-12 05:01:27.624952] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:20.701 [2024-05-12 05:01:27.624960] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:20.701 [2024-05-12 05:01:27.624968] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:20.701 [2024-05-12 05:01:27.624977] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:20.701 [2024-05-12 05:01:27.624985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.624994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:20.701 [2024-05-12 05:01:27.625003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:21:20.701 [2024-05-12 05:01:27.625012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.625081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.701 [2024-05-12 05:01:27.625095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:20.701 [2024-05-12 05:01:27.625107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:20.701 [2024-05-12 05:01:27.625116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.701 [2024-05-12 05:01:27.625181] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:20.701 [2024-05-12 05:01:27.625194] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:20.701 [2024-05-12 05:01:27.625204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:20.701 [2024-05-12 05:01:27.625260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:20.701 [2024-05-12 05:01:27.625287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.701 [2024-05-12 05:01:27.625304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:20.701 [2024-05-12 05:01:27.625313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:20.701 [2024-05-12 05:01:27.625321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.701 [2024-05-12 05:01:27.625329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:20.701 [2024-05-12 05:01:27.625338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:20.701 [2024-05-12 05:01:27.625346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:20.701 [2024-05-12 05:01:27.625368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:20.701 [2024-05-12 05:01:27.625376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:20.701 [2024-05-12 05:01:27.625393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:20.701 [2024-05-12 05:01:27.625413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:20.701 [2024-05-12 05:01:27.625431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:20.701 [2024-05-12 05:01:27.625455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:20.701 [2024-05-12 05:01:27.625479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:20.701 [2024-05-12 05:01:27.625503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:20.701 [2024-05-12 05:01:27.625527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:20.701 [2024-05-12 05:01:27.625534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.701 [2024-05-12 05:01:27.625558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:20.701 [2024-05-12 05:01:27.625582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:20.701 [2024-05-12 05:01:27.625590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.701 [2024-05-12 05:01:27.625599] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:20.701 [2024-05-12 05:01:27.625609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:20.701 [2024-05-12 05:01:27.625618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.701 [2024-05-12 05:01:27.625627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.702 [2024-05-12 05:01:27.625657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:20.702 [2024-05-12 05:01:27.625666] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:20.702 [2024-05-12 05:01:27.625675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:20.702 [2024-05-12 05:01:27.625684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:20.702 [2024-05-12 05:01:27.625693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:20.702 [2024-05-12 05:01:27.625702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:20.702 [2024-05-12 05:01:27.625713] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:20.702 [2024-05-12 05:01:27.625725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.702 [2024-05-12 05:01:27.625736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:20.702 [2024-05-12 05:01:27.625746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:20.702 [2024-05-12 05:01:27.625756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:20.702 [2024-05-12 05:01:27.625766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:20.702 [2024-05-12 05:01:27.625776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:20.702 [2024-05-12 05:01:27.625786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:20.702 [2024-05-12 05:01:27.625796] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:20.702 [2024-05-12 05:01:27.625806] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:20.702 [2024-05-12 05:01:27.625816] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:20.702 [2024-05-12 05:01:27.625826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:20.702 [2024-05-12 05:01:27.625835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:20.702 [2024-05-12 05:01:27.625846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:20.702 [2024-05-12 05:01:27.625856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:20.702 [2024-05-12 05:01:27.625867] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:20.702 [2024-05-12 05:01:27.625877] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.702 [2024-05-12 05:01:27.625888] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:20.702 [2024-05-12 05:01:27.625898] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:20.702 [2024-05-12 05:01:27.625908] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:20.702 [2024-05-12 05:01:27.625917] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:20.702 [2024-05-12 05:01:27.625928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.625938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:20.702 [2024-05-12 05:01:27.625948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:21:20.702 [2024-05-12 05:01:27.625958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.640924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.640962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:20.702 [2024-05-12 05:01:27.640977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.917 ms 00:21:20.702 [2024-05-12 05:01:27.640986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.641062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.641074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:20.702 [2024-05-12 05:01:27.641089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:20.702 [2024-05-12 05:01:27.641097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.681045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.681086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:20.702 [2024-05-12 05:01:27.681101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.891 ms 00:21:20.702 [2024-05-12 05:01:27.681111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.681159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.681173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:20.702 [2024-05-12 05:01:27.681182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:20.702 [2024-05-12 05:01:27.681191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.681595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.681620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:20.702 [2024-05-12 05:01:27.681632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:21:20.702 [2024-05-12 05:01:27.681642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.681778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.681795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:20.702 [2024-05-12 05:01:27.681807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:21:20.702 [2024-05-12 05:01:27.681817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.695522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.695555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:20.702 [2024-05-12 05:01:27.695569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.667 ms 00:21:20.702 [2024-05-12 
05:01:27.695578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.708549] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:20.702 [2024-05-12 05:01:27.708585] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:20.702 [2024-05-12 05:01:27.708600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.708609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:20.702 [2024-05-12 05:01:27.708620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.918 ms 00:21:20.702 [2024-05-12 05:01:27.708628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.731676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.731710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:20.702 [2024-05-12 05:01:27.731725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.010 ms 00:21:20.702 [2024-05-12 05:01:27.731734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.744073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.744108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:20.702 [2024-05-12 05:01:27.744137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.300 ms 00:21:20.702 [2024-05-12 05:01:27.744147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.756295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.756328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:20.702 [2024-05-12 05:01:27.756371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.112 ms 00:21:20.702 [2024-05-12 05:01:27.756380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.756743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.756763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:20.702 [2024-05-12 05:01:27.756774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:20.702 [2024-05-12 05:01:27.756782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.814804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.702 [2024-05-12 05:01:27.814860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:20.702 [2024-05-12 05:01:27.814893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.001 ms 00:21:20.702 [2024-05-12 05:01:27.814904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.702 [2024-05-12 05:01:27.825147] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:20.962 [2024-05-12 05:01:27.827447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.827485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:20.962 [2024-05-12 05:01:27.827516] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.493 ms 00:21:20.962 [2024-05-12 05:01:27.827526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.827620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.827639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:20.962 [2024-05-12 05:01:27.827650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:20.962 [2024-05-12 05:01:27.827660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.827744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.827759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:20.962 [2024-05-12 05:01:27.827769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:20.962 [2024-05-12 05:01:27.827778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.829746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.829778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:20.962 [2024-05-12 05:01:27.829810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.930 ms 00:21:20.962 [2024-05-12 05:01:27.829819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.829845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.829856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:20.962 [2024-05-12 05:01:27.829866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:20.962 [2024-05-12 05:01:27.829875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.829918] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:20.962 [2024-05-12 05:01:27.829932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.829941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:20.962 [2024-05-12 05:01:27.829950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:20.962 [2024-05-12 05:01:27.829962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.857613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.857680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:20.962 [2024-05-12 05:01:27.857711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.631 ms 00:21:20.962 [2024-05-12 05:01:27.857721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.857789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.962 [2024-05-12 05:01:27.857810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:20.962 [2024-05-12 05:01:27.857821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:20.962 [2024-05-12 05:01:27.857830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.962 [2024-05-12 05:01:27.859075] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 258.496 ms, result 0 00:22:05.379  Copying: 23/1024 [MB] (23 MBps) ... Copying: 1024/1024 [MB] (average 22 MBps)[2024-05-12 05:02:12.434607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.379 [2024-05-12 05:02:12.434700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.379 [2024-05-12 05:02:12.434737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.379 [2024-05-12 05:02:12.434775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.379 [2024-05-12 05:02:12.437332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.379 [2024-05-12 05:02:12.443367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.379 [2024-05-12 05:02:12.443402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.379 [2024-05-12 05:02:12.443433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.947 ms 00:22:05.379 [2024-05-12 05:02:12.443443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.379 [2024-05-12 05:02:12.455147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.379 [2024-05-12 05:02:12.455200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.379 [2024-05-12 05:02:12.455261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.756 ms 00:22:05.379 [2024-05-12 05:02:12.455273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.379 [2024-05-12 05:02:12.475852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.379 [2024-05-12 05:02:12.475887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.379 [2024-05-12 05:02:12.475918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.550 ms 00:22:05.379 [2024-05-12 05:02:12.475928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.379 [2024-05-12
05:02:12.481385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.379 [2024-05-12 05:02:12.481415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:05.379 [2024-05-12 05:02:12.481443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.424 ms 00:22:05.379 [2024-05-12 05:02:12.481454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.638 [2024-05-12 05:02:12.506571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.638 [2024-05-12 05:02:12.506610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.638 [2024-05-12 05:02:12.506655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.064 ms 00:22:05.638 [2024-05-12 05:02:12.506679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.638 [2024-05-12 05:02:12.521645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.638 [2024-05-12 05:02:12.521686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.638 [2024-05-12 05:02:12.521701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.928 ms 00:22:05.638 [2024-05-12 05:02:12.521710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.638 [2024-05-12 05:02:12.625802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.638 [2024-05-12 05:02:12.625841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.639 [2024-05-12 05:02:12.625857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.054 ms 00:22:05.639 [2024-05-12 05:02:12.625867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.639 [2024-05-12 05:02:12.651733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.639 [2024-05-12 05:02:12.651788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:05.639 [2024-05-12 05:02:12.651804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.847 ms 00:22:05.639 [2024-05-12 05:02:12.651814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.639 [2024-05-12 05:02:12.677385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.639 [2024-05-12 05:02:12.677422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:05.639 [2024-05-12 05:02:12.677436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.533 ms 00:22:05.639 [2024-05-12 05:02:12.677445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.639 [2024-05-12 05:02:12.702298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.639 [2024-05-12 05:02:12.702333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:05.639 [2024-05-12 05:02:12.702347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.803 ms 00:22:05.639 [2024-05-12 05:02:12.702356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.639 [2024-05-12 05:02:12.727426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.639 [2024-05-12 05:02:12.727460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.639 [2024-05-12 05:02:12.727473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.999 ms 00:22:05.639 [2024-05-12 05:02:12.727482] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.639 [2024-05-12 05:02:12.727517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.639 [2024-05-12 05:02:12.727535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116992 / 261120 wr_cnt: 1 state: open 00:22:05.639 [2024-05-12 05:02:12.727546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727980] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.727998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728269] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 
05:02:12.728537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:05.639 [2024-05-12 05:02:12.728595] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:05.639 [2024-05-12 05:02:12.728605] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76b9a96f-c2ae-494a-8e67-433e8d3249a5 00:22:05.639 [2024-05-12 05:02:12.728615] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116992 00:22:05.639 [2024-05-12 05:02:12.728624] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117952 00:22:05.639 [2024-05-12 05:02:12.728633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116992 00:22:05.639 [2024-05-12 05:02:12.728643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:22:05.639 [2024-05-12 05:02:12.728652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:05.639 [2024-05-12 05:02:12.728661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:05.639 [2024-05-12 05:02:12.728676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.639 [2024-05-12 05:02:12.728684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.639 [2024-05-12 05:02:12.728693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.639 [2024-05-12 05:02:12.728702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.639 [2024-05-12 05:02:12.728712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.639 [2024-05-12 05:02:12.728722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:22:05.639 [2024-05-12 05:02:12.728741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.639 [2024-05-12 05:02:12.742010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.639 [2024-05-12 05:02:12.742043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.639 [2024-05-12 05:02:12.742057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.220 ms 00:22:05.639 [2024-05-12 05:02:12.742072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.639 [2024-05-12 05:02:12.742288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.639 [2024-05-12 05:02:12.742304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.639 [2024-05-12 05:02:12.742315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:22:05.639 [2024-05-12 05:02:12.742324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.783044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.783083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.898 [2024-05-12 05:02:12.783102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.783111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.783162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.783174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:22:05.898 [2024-05-12 05:02:12.783183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.783192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.783319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.783338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.898 [2024-05-12 05:02:12.783350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.783367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.783389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.783402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.898 [2024-05-12 05:02:12.783414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.783424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.861787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.861835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.898 [2024-05-12 05:02:12.861857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.861866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.891608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.891643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.898 [2024-05-12 05:02:12.891657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.891666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.891737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.891751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.898 [2024-05-12 05:02:12.891761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.891769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.891820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.891833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.898 [2024-05-12 05:02:12.891842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.891851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.891943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.891959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.898 [2024-05-12 05:02:12.891969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.891977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.892019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 
05:02:12.892033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:05.898 [2024-05-12 05:02:12.892043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.892051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.892087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.892125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.898 [2024-05-12 05:02:12.892136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.892146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.892208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.898 [2024-05-12 05:02:12.892224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.898 [2024-05-12 05:02:12.892271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.898 [2024-05-12 05:02:12.892282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.898 [2024-05-12 05:02:12.892409] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 458.737 ms, result 0 00:22:07.269 00:22:07.269 00:22:07.269 05:02:14 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:07.269 [2024-05-12 05:02:14.387313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
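Note: every FTL management step above is traced by mngt/ftl_mngt.c in a fixed four-record pattern (406:trace_step "Action"/"Rollback", then 407 "name", 409 "duration", 410 "status"), and each management process closes with a 434:finish_msg summary, e.g. 'FTL startup', duration = 258.496 ms and 'FTL shutdown', duration = 458.737 ms above. A minimal Python sketch for tallying per-step durations out of a captured log like this one; the regex is derived from the record text above, and the script itself is illustrative, not part of the SPDK test suite:

import re
import sys

# A trace_step "name" record is followed by its "duration" record, both tagged
# with the [FTL][<device>] prefix; the step name runs up to the next HH:MM:SS
# console timestamp. Non-greedy matching pairs each name with the nearest
# following duration.
STEP = re.compile(
    r"\[FTL\]\[\w+\] name: (?P<name>.+?) \d{2}:\d{2}:\d{2}"
    r".*?\[FTL\]\[\w+\] duration: (?P<ms>[0-9.]+) ms",
    re.DOTALL,
)

def tally(log_text):
    # Sum durations per step name; a step such as "Initialize NV cache" can
    # appear in several management processes (startup, shutdown, restart).
    totals = {}
    for m in STEP.finditer(log_text):
        totals[m.group("name")] = totals.get(m.group("name"), 0.0) + float(m.group("ms"))
    return totals

if __name__ == "__main__":
    for name, ms in sorted(tally(sys.stdin.read()).items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")

On this log the single largest step in the 'FTL shutdown' process is 'Persist P2L metadata' at 104.054 ms. As a sanity check on the statistics dump above, WAF is simply total writes divided by user writes: 117952 / 116992 ≈ 1.0082, matching the logged 'WAF: 1.0082'.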
00:22:07.269 [2024-05-12 05:02:14.387475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75782 ] 00:22:07.528 [2024-05-12 05:02:14.550869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.797 [2024-05-12 05:02:14.695424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.057 [2024-05-12 05:02:14.947821] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.057 [2024-05-12 05:02:14.947892] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.057 [2024-05-12 05:02:15.098348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.057 [2024-05-12 05:02:15.098395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:08.057 [2024-05-12 05:02:15.098415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:08.057 [2024-05-12 05:02:15.098425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.057 [2024-05-12 05:02:15.098485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.057 [2024-05-12 05:02:15.098502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.057 [2024-05-12 05:02:15.098513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:08.057 [2024-05-12 05:02:15.098523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.057 [2024-05-12 05:02:15.098550] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:08.057 [2024-05-12 05:02:15.099336] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:08.057 [2024-05-12 05:02:15.099364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.057 [2024-05-12 05:02:15.099376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.057 [2024-05-12 05:02:15.099386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:22:08.057 [2024-05-12 05:02:15.099411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.057 [2024-05-12 05:02:15.100524] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:08.057 [2024-05-12 05:02:15.113658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.057 [2024-05-12 05:02:15.113694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:08.057 [2024-05-12 05:02:15.113714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.135 ms 00:22:08.057 [2024-05-12 05:02:15.113724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.057 [2024-05-12 05:02:15.113780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.057 [2024-05-12 05:02:15.113797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:08.057 [2024-05-12 05:02:15.113807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:08.057 [2024-05-12 05:02:15.113816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.057 [2024-05-12 05:02:15.118233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.057 [2024-05-12 
05:02:15.118278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.057 [2024-05-12 05:02:15.118293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.343 ms 00:22:08.057 [2024-05-12 05:02:15.118303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.057 [2024-05-12 05:02:15.118423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.057 [2024-05-12 05:02:15.118440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.057 [2024-05-12 05:02:15.118451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:08.057 [2024-05-12 05:02:15.118460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.057 [2024-05-12 05:02:15.118507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.058 [2024-05-12 05:02:15.118526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:08.058 [2024-05-12 05:02:15.118536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:08.058 [2024-05-12 05:02:15.118545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.058 [2024-05-12 05:02:15.118577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.058 [2024-05-12 05:02:15.122411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.058 [2024-05-12 05:02:15.122589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.058 [2024-05-12 05:02:15.122708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.845 ms 00:22:08.058 [2024-05-12 05:02:15.122862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.058 [2024-05-12 05:02:15.122946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.058 [2024-05-12 05:02:15.123041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:08.058 [2024-05-12 05:02:15.123143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:08.058 [2024-05-12 05:02:15.123188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.058 [2024-05-12 05:02:15.123295] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:08.058 [2024-05-12 05:02:15.123366] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:08.058 [2024-05-12 05:02:15.123517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:08.058 [2024-05-12 05:02:15.123743] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:08.058 [2024-05-12 05:02:15.123921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:08.058 [2024-05-12 05:02:15.123942] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:08.058 [2024-05-12 05:02:15.123955] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:08.058 [2024-05-12 05:02:15.123967] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:08.058 [2024-05-12 05:02:15.123986] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:08.058 [2024-05-12 05:02:15.123996] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:08.058 [2024-05-12 05:02:15.124006] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:08.058 [2024-05-12 05:02:15.124014] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:08.058 [2024-05-12 05:02:15.124023] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:08.058 [2024-05-12 05:02:15.124034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.058 [2024-05-12 05:02:15.124044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:08.058 [2024-05-12 05:02:15.124054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:22:08.058 [2024-05-12 05:02:15.124063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.058 [2024-05-12 05:02:15.124179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.058 [2024-05-12 05:02:15.124199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:08.058 [2024-05-12 05:02:15.124210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:08.058 [2024-05-12 05:02:15.124219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.058 [2024-05-12 05:02:15.124354] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:08.058 [2024-05-12 05:02:15.124374] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:08.058 [2024-05-12 05:02:15.124386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:08.058 [2024-05-12 05:02:15.124431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:08.058 [2024-05-12 05:02:15.124473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.058 [2024-05-12 05:02:15.124505] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:08.058 [2024-05-12 05:02:15.124513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:08.058 [2024-05-12 05:02:15.124522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.058 [2024-05-12 05:02:15.124530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:08.058 [2024-05-12 05:02:15.124539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:08.058 [2024-05-12 05:02:15.124547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:08.058 [2024-05-12 05:02:15.124564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:08.058 [2024-05-12 05:02:15.124573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:08.058 [2024-05-12 05:02:15.124592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:08.058 [2024-05-12 05:02:15.124613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124636] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:08.058 [2024-05-12 05:02:15.124645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:08.058 [2024-05-12 05:02:15.124671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:08.058 [2024-05-12 05:02:15.124695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:08.058 [2024-05-12 05:02:15.124720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:08.058 [2024-05-12 05:02:15.124744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.058 [2024-05-12 05:02:15.124761] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:08.058 [2024-05-12 05:02:15.124769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:08.058 [2024-05-12 05:02:15.124777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.058 [2024-05-12 05:02:15.124785] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:08.058 [2024-05-12 05:02:15.124795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:08.058 [2024-05-12 05:02:15.124804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.058 [2024-05-12 05:02:15.124827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:08.058 [2024-05-12 05:02:15.124835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:08.058 [2024-05-12 05:02:15.124843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:08.058 [2024-05-12 05:02:15.124852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:08.058 [2024-05-12 05:02:15.124860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:08.058 [2024-05-12 05:02:15.124869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:08.058 [2024-05-12 05:02:15.124878] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:08.058 [2024-05-12 05:02:15.124889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.058 [2024-05-12 05:02:15.124899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:08.058 [2024-05-12 05:02:15.124908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:08.058 [2024-05-12 05:02:15.124918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:08.058 [2024-05-12 05:02:15.124927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:08.058 [2024-05-12 05:02:15.124936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:08.058 [2024-05-12 05:02:15.124946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:08.058 [2024-05-12 05:02:15.124955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:08.058 [2024-05-12 05:02:15.124964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:08.058 [2024-05-12 05:02:15.124973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:08.058 [2024-05-12 05:02:15.124982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:08.058 [2024-05-12 05:02:15.124991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:08.058 [2024-05-12 05:02:15.125000] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:08.058 [2024-05-12 05:02:15.125009] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:08.058 [2024-05-12 05:02:15.125018] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:08.058 [2024-05-12 05:02:15.125028] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.058 [2024-05-12 05:02:15.125038] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:08.058 [2024-05-12 05:02:15.125047] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:08.058 [2024-05-12 05:02:15.125056] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:08.058 [2024-05-12 05:02:15.125066] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:22:08.058 [2024-05-12 05:02:15.125076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.058 [2024-05-12 05:02:15.125085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:08.058 [2024-05-12 05:02:15.125094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:22:08.058 [2024-05-12 05:02:15.125103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.058 [2024-05-12 05:02:15.140166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.059 [2024-05-12 05:02:15.140209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.059 [2024-05-12 05:02:15.140242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.019 ms 00:22:08.059 [2024-05-12 05:02:15.140253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.059 [2024-05-12 05:02:15.140356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.059 [2024-05-12 05:02:15.140376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:08.059 [2024-05-12 05:02:15.140387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:08.059 [2024-05-12 05:02:15.140398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.059 [2024-05-12 05:02:15.178570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.059 [2024-05-12 05:02:15.178616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.059 [2024-05-12 05:02:15.178633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.113 ms 00:22:08.059 [2024-05-12 05:02:15.178648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.059 [2024-05-12 05:02:15.178701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.059 [2024-05-12 05:02:15.178716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.059 [2024-05-12 05:02:15.178727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:08.059 [2024-05-12 05:02:15.178737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.059 [2024-05-12 05:02:15.179103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.059 [2024-05-12 05:02:15.179120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.059 [2024-05-12 05:02:15.179130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:22:08.059 [2024-05-12 05:02:15.179140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.059 [2024-05-12 05:02:15.179290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.059 [2024-05-12 05:02:15.179322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.059 [2024-05-12 05:02:15.179344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:22:08.059 [2024-05-12 05:02:15.179367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.318 [2024-05-12 05:02:15.194248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.318 [2024-05-12 05:02:15.194284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.318 [2024-05-12 05:02:15.194298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.856 ms 00:22:08.318 [2024-05-12 
05:02:15.194307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.318 [2024-05-12 05:02:15.207734] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:08.318 [2024-05-12 05:02:15.207771] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:08.318 [2024-05-12 05:02:15.207787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.318 [2024-05-12 05:02:15.207796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:08.318 [2024-05-12 05:02:15.207807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.380 ms 00:22:08.318 [2024-05-12 05:02:15.207815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.318 [2024-05-12 05:02:15.232448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.318 [2024-05-12 05:02:15.232484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:08.318 [2024-05-12 05:02:15.232499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.595 ms 00:22:08.319 [2024-05-12 05:02:15.232508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.244783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.244818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:08.319 [2024-05-12 05:02:15.244832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.234 ms 00:22:08.319 [2024-05-12 05:02:15.244841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.257033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.257068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:08.319 [2024-05-12 05:02:15.257081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.157 ms 00:22:08.319 [2024-05-12 05:02:15.257090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.257525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.257554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:08.319 [2024-05-12 05:02:15.257598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:22:08.319 [2024-05-12 05:02:15.257608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.316511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.316571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:08.319 [2024-05-12 05:02:15.316588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.879 ms 00:22:08.319 [2024-05-12 05:02:15.316598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.326279] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:08.319 [2024-05-12 05:02:15.328189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.328261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:08.319 [2024-05-12 05:02:15.328277] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.536 ms 00:22:08.319 [2024-05-12 05:02:15.328287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.328371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.328388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:08.319 [2024-05-12 05:02:15.328400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:08.319 [2024-05-12 05:02:15.328409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.329428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.329461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:08.319 [2024-05-12 05:02:15.329474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:22:08.319 [2024-05-12 05:02:15.329483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.330988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.331020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:08.319 [2024-05-12 05:02:15.331033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:22:08.319 [2024-05-12 05:02:15.331041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.331071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.331083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:08.319 [2024-05-12 05:02:15.331093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:08.319 [2024-05-12 05:02:15.331108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.331144] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:08.319 [2024-05-12 05:02:15.331158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.331167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:08.319 [2024-05-12 05:02:15.331180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:08.319 [2024-05-12 05:02:15.331188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.355232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.355269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:08.319 [2024-05-12 05:02:15.355284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.024 ms 00:22:08.319 [2024-05-12 05:02:15.355293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.355359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.319 [2024-05-12 05:02:15.355381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:08.319 [2024-05-12 05:02:15.355391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:08.319 [2024-05-12 05:02:15.355400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.319 [2024-05-12 05:02:15.362274] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 262.266 ms, result 0 00:22:53.045  Copying: 1024/1024 [MB] (average 23 MBps)[2024-05-12 05:03:00.009396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.045 [2024-05-12 05:03:00.009470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:53.045 [2024-05-12 05:03:00.009493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.045 [2024-05-12 05:03:00.009512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.045 [2024-05-12 05:03:00.009557] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.045 [2024-05-12 05:03:00.012939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.045 [2024-05-12 05:03:00.012972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:53.045 [2024-05-12 05:03:00.013001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:22:53.045 [2024-05-12 05:03:00.013011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.045 [2024-05-12 05:03:00.013293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.045 [2024-05-12 05:03:00.013314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:53.045 [2024-05-12 05:03:00.013326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:22:53.045 [2024-05-12 05:03:00.013342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.045 [2024-05-12 05:03:00.017959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.045 [2024-05-12 05:03:00.018002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:53.045 [2024-05-12 05:03:00.018020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.597 ms 00:22:53.045 [2024-05-12 05:03:00.018032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.045 [2024-05-12
05:03:00.026017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.045 [2024-05-12 05:03:00.026064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:53.045 [2024-05-12 05:03:00.026094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.944 ms 00:22:53.045 [2024-05-12 05:03:00.026104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.045 [2024-05-12 05:03:00.053339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.045 [2024-05-12 05:03:00.053375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:53.045 [2024-05-12 05:03:00.053390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.158 ms 00:22:53.045 [2024-05-12 05:03:00.053399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.045 [2024-05-12 05:03:00.067923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.045 [2024-05-12 05:03:00.067963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:53.045 [2024-05-12 05:03:00.067977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.488 ms 00:22:53.045 [2024-05-12 05:03:00.067986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.306 [2024-05-12 05:03:00.186290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.306 [2024-05-12 05:03:00.186332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:53.306 [2024-05-12 05:03:00.186364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.280 ms 00:22:53.306 [2024-05-12 05:03:00.186376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.306 [2024-05-12 05:03:00.211518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.306 [2024-05-12 05:03:00.211552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:53.306 [2024-05-12 05:03:00.211582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.122 ms 00:22:53.306 [2024-05-12 05:03:00.211591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.306 [2024-05-12 05:03:00.237052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.306 [2024-05-12 05:03:00.237087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:53.306 [2024-05-12 05:03:00.237117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.410 ms 00:22:53.306 [2024-05-12 05:03:00.237125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.306 [2024-05-12 05:03:00.261597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.306 [2024-05-12 05:03:00.261663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:53.306 [2024-05-12 05:03:00.261709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.436 ms 00:22:53.306 [2024-05-12 05:03:00.261717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.306 [2024-05-12 05:03:00.287502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.306 [2024-05-12 05:03:00.287536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:53.306 [2024-05-12 05:03:00.287566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.713 ms 00:22:53.306 [2024-05-12 05:03:00.287575] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.306 [2024-05-12 05:03:00.287611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:53.306 [2024-05-12 05:03:00.287630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:22:53.306 [2024-05-12 05:03:00.287642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.287998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.288007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.288016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.288025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.288034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.288043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:53.306 [2024-05-12 05:03:00.288053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288080] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288397] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 
05:03:00.288727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:53.307 [2024-05-12 05:03:00.288770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:53.307 [2024-05-12 05:03:00.288780] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76b9a96f-c2ae-494a-8e67-433e8d3249a5 00:22:53.307 [2024-05-12 05:03:00.288790] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:22:53.307 [2024-05-12 05:03:00.288799] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 17600 00:22:53.307 [2024-05-12 05:03:00.288808] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 16640 00:22:53.307 [2024-05-12 05:03:00.288818] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0577 00:22:53.307 [2024-05-12 05:03:00.288828] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:53.307 [2024-05-12 05:03:00.288844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:53.307 [2024-05-12 05:03:00.288853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:53.307 [2024-05-12 05:03:00.288862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:53.307 [2024-05-12 05:03:00.288870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:53.307 [2024-05-12 05:03:00.288880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.307 [2024-05-12 05:03:00.288890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:53.307 [2024-05-12 05:03:00.288899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:22:53.307 [2024-05-12 05:03:00.288910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.307 [2024-05-12 05:03:00.302433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.307 [2024-05-12 05:03:00.302464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:53.307 [2024-05-12 05:03:00.302493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.478 ms 00:22:53.307 [2024-05-12 05:03:00.302509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.307 [2024-05-12 05:03:00.302721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.307 [2024-05-12 05:03:00.302735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:53.307 [2024-05-12 05:03:00.302745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:22:53.307 [2024-05-12 05:03:00.302753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.307 [2024-05-12 05:03:00.338852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.307 [2024-05-12 05:03:00.338896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.307 [2024-05-12 05:03:00.338926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.307 [2024-05-12 05:03:00.338936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.307 [2024-05-12 05:03:00.338988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.307 [2024-05-12 05:03:00.339001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:22:53.307 [2024-05-12 05:03:00.339011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.307 [2024-05-12 05:03:00.339020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.307 [2024-05-12 05:03:00.339099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.307 [2024-05-12 05:03:00.339115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.307 [2024-05-12 05:03:00.339131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.307 [2024-05-12 05:03:00.339140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.307 [2024-05-12 05:03:00.339159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.307 [2024-05-12 05:03:00.339169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.307 [2024-05-12 05:03:00.339178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.307 [2024-05-12 05:03:00.339196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.307 [2024-05-12 05:03:00.415470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.307 [2024-05-12 05:03:00.415521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.307 [2024-05-12 05:03:00.415542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.307 [2024-05-12 05:03:00.415552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.448220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.567 [2024-05-12 05:03:00.448275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.567 [2024-05-12 05:03:00.448293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.567 [2024-05-12 05:03:00.448305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.448395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.567 [2024-05-12 05:03:00.448413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.567 [2024-05-12 05:03:00.448426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.567 [2024-05-12 05:03:00.448445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.448544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.567 [2024-05-12 05:03:00.448573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.567 [2024-05-12 05:03:00.448599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.567 [2024-05-12 05:03:00.448609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.448716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.567 [2024-05-12 05:03:00.448734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.567 [2024-05-12 05:03:00.448745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.567 [2024-05-12 05:03:00.448755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.448804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.567 [2024-05-12 
05:03:00.448825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:53.567 [2024-05-12 05:03:00.448837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.567 [2024-05-12 05:03:00.448847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.448886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.567 [2024-05-12 05:03:00.448900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.567 [2024-05-12 05:03:00.448910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.567 [2024-05-12 05:03:00.448920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.448972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.567 [2024-05-12 05:03:00.448986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.567 [2024-05-12 05:03:00.448996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.567 [2024-05-12 05:03:00.449008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.567 [2024-05-12 05:03:00.449132] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.710 ms, result 0 00:22:54.505 00:22:54.505 00:22:54.505 05:03:01 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:56.409 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:56.409 05:03:03 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:56.409 05:03:03 -- ftl/restore.sh@85 -- # restore_kill 00:22:56.409 05:03:03 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:56.409 05:03:03 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:56.409 05:03:03 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:56.409 Process with pid 74094 is not found 00:22:56.409 Remove shared memory files 00:22:56.409 05:03:03 -- ftl/restore.sh@32 -- # killprocess 74094 00:22:56.409 05:03:03 -- common/autotest_common.sh@926 -- # '[' -z 74094 ']' 00:22:56.409 05:03:03 -- common/autotest_common.sh@930 -- # kill -0 74094 00:22:56.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74094) - No such process 00:22:56.409 05:03:03 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74094 is not found' 00:22:56.409 05:03:03 -- ftl/restore.sh@33 -- # remove_shm 00:22:56.409 05:03:03 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:56.409 05:03:03 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:56.409 05:03:03 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:56.409 05:03:03 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:56.409 05:03:03 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:56.409 05:03:03 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:56.409 ************************************ 00:22:56.409 END TEST ftl_restore 00:22:56.409 ************************************ 00:22:56.409 00:22:56.409 real 3m30.226s 00:22:56.409 user 3m16.586s 00:22:56.409 sys 0m15.070s 00:22:56.409 05:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.409 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:22:56.409 05:03:03 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 
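The ftl_restore test that just completed follows a write/checksum/restore/verify pattern: a payload is copied through the FTL bdev (the "Copying" progress above), its md5 digest is recorded, the device goes through a shutdown/startup cycle, and `md5sum -c` re-checks the digest, printing "OK" as seen above. A minimal sketch of that pattern, with hypothetical paths and plain dd standing in for the test's actual copy step (restore.sh keeps its files under test/ftl):

    # Sketch: verify data integrity across an FTL shutdown/restore cycle.
    testfile=/tmp/ftl_testfile                         # hypothetical path
    dd if=/dev/urandom of="$testfile" bs=1M count=256  # generate a payload
    md5sum "$testfile" > "$testfile.md5"               # record the digest
    # ...write the payload to the FTL bdev, restart the device,
    # ...read it back into $testfile...
    md5sum -c "$testfile.md5"                          # prints "<file>: OK" on success

The dirty_shutdown test starting below exercises the same idea, except the target is killed without a clean FTL shutdown, so recovery has to be reconstructed from the NV cache and P2L checkpoints rather than from a cleanly persisted state.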
00:22:56.409 05:03:03 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:56.409 05:03:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.409 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:22:56.409 ************************************ 00:22:56.409 START TEST ftl_dirty_shutdown 00:22:56.409 ************************************ 00:22:56.409 05:03:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:56.409 * Looking for test storage... 00:22:56.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:56.409 05:03:03 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:56.409 05:03:03 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:56.409 05:03:03 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:56.409 05:03:03 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:56.409 05:03:03 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:56.409 05:03:03 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:56.409 05:03:03 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.409 05:03:03 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:56.409 05:03:03 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:56.409 05:03:03 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:56.409 05:03:03 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:56.410 05:03:03 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:56.410 05:03:03 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:56.410 05:03:03 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:56.410 05:03:03 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:56.410 05:03:03 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:56.410 05:03:03 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:56.410 05:03:03 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:56.410 05:03:03 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:56.410 05:03:03 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:56.410 05:03:03 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:56.410 05:03:03 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:56.410 05:03:03 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:56.410 05:03:03 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:56.410 05:03:03 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:56.410 05:03:03 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:56.410 05:03:03 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:56.410 05:03:03 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:56.410 05:03:03 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
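The xtrace above shows ftl/common.sh resolving the test and repository directories and exporting the tool paths the suite relies on. Condensed, it is the usual dirname/readlink idiom; a sketch only (the real script also exports core masks, cpumasks, and config paths, as traced), with $0 standing in for the sourced script path:

    # Path setup pattern from ftl/common.sh, per the trace above.
    testdir=$(readlink -f "$(dirname "$0")")    # directory of the test script
    rootdir=$(readlink -f "$testdir/../..")     # SPDK repository root
    rpc_py=$rootdir/scripts/rpc.py              # RPC client the tests call
    spdk_tgt_bin=$rootdir/build/bin/spdk_tgt    # target binary under test
    spdk_dd_bin=$rootdir/build/bin/spdk_dd      # dd-style I/O tool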
00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76335 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:56.410 05:03:03 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76335 00:22:56.410 05:03:03 -- common/autotest_common.sh@819 -- # '[' -z 76335 ']' 00:22:56.410 05:03:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.410 05:03:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:56.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.410 05:03:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.410 05:03:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:56.410 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:22:56.669 [2024-05-12 05:03:03.632582] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
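Everything from the getopts calls down to waitforlisten above is a standard parse/launch/wait sequence: parse -c (NV cache BDF) and -u, take the base device as a positional argument, trap cleanup, start spdk_tgt pinned to one core, and block until its RPC socket is up. A sketch of that sequence; the -u branch is an assumption (only -c is exercised in this run), and waitforlisten is the autotest_common.sh helper that polls until the target listens on /var/tmp/spdk.sock:

    # Option handling and target launch, per the dirty_shutdown.sh trace.
    while getopts ':u:c:' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # -c <bdf>: NV cache device (0000:00:06.0 here)
        u) uuid=$OPTARG ;;       # assumed: -u reuses an existing FTL instance UUID
      esac
    done
    shift 2                      # drop the single '-c <bdf>' pair, as traced
    device=$1                    # base device BDF (0000:00:07.0 here)
    timeout=240

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    "$spdk_tgt_bin" -m 0x1 &     # core mask 0x1 pins the target to core 0
    svcpid=$!
    waitforlisten "$svcpid"      # poll until the RPC socket is listening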
00:22:56.669 [2024-05-12 05:03:03.632747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76335 ] 00:22:56.928 [2024-05-12 05:03:03.800512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.928 [2024-05-12 05:03:03.978440] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:56.928 [2024-05-12 05:03:03.978661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.306 05:03:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:58.306 05:03:05 -- common/autotest_common.sh@852 -- # return 0 00:22:58.306 05:03:05 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:58.306 05:03:05 -- ftl/common.sh@54 -- # local name=nvme0 00:22:58.306 05:03:05 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:58.306 05:03:05 -- ftl/common.sh@56 -- # local size=103424 00:22:58.306 05:03:05 -- ftl/common.sh@59 -- # local base_bdev 00:22:58.306 05:03:05 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:58.565 05:03:05 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:58.565 05:03:05 -- ftl/common.sh@62 -- # local base_size 00:22:58.565 05:03:05 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:58.565 05:03:05 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:22:58.565 05:03:05 -- common/autotest_common.sh@1358 -- # local bdev_info 00:22:58.565 05:03:05 -- common/autotest_common.sh@1359 -- # local bs 00:22:58.565 05:03:05 -- common/autotest_common.sh@1360 -- # local nb 00:22:58.565 05:03:05 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:58.824 05:03:05 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:22:58.824 { 00:22:58.824 "name": "nvme0n1", 00:22:58.824 "aliases": [ 00:22:58.824 "144bd5c3-b40a-44a8-961b-db00c694b9e6" 00:22:58.824 ], 00:22:58.824 "product_name": "NVMe disk", 00:22:58.824 "block_size": 4096, 00:22:58.824 "num_blocks": 1310720, 00:22:58.824 "uuid": "144bd5c3-b40a-44a8-961b-db00c694b9e6", 00:22:58.824 "assigned_rate_limits": { 00:22:58.824 "rw_ios_per_sec": 0, 00:22:58.824 "rw_mbytes_per_sec": 0, 00:22:58.824 "r_mbytes_per_sec": 0, 00:22:58.824 "w_mbytes_per_sec": 0 00:22:58.824 }, 00:22:58.824 "claimed": true, 00:22:58.824 "claim_type": "read_many_write_one", 00:22:58.824 "zoned": false, 00:22:58.824 "supported_io_types": { 00:22:58.824 "read": true, 00:22:58.824 "write": true, 00:22:58.824 "unmap": true, 00:22:58.824 "write_zeroes": true, 00:22:58.824 "flush": true, 00:22:58.824 "reset": true, 00:22:58.824 "compare": true, 00:22:58.824 "compare_and_write": false, 00:22:58.824 "abort": true, 00:22:58.824 "nvme_admin": true, 00:22:58.824 "nvme_io": true 00:22:58.824 }, 00:22:58.824 "driver_specific": { 00:22:58.824 "nvme": [ 00:22:58.824 { 00:22:58.824 "pci_address": "0000:00:07.0", 00:22:58.824 "trid": { 00:22:58.824 "trtype": "PCIe", 00:22:58.824 "traddr": "0000:00:07.0" 00:22:58.824 }, 00:22:58.824 "ctrlr_data": { 00:22:58.824 "cntlid": 0, 00:22:58.824 "vendor_id": "0x1b36", 00:22:58.824 "model_number": "QEMU NVMe Ctrl", 00:22:58.824 "serial_number": "12341", 00:22:58.824 "firmware_revision": "8.0.0", 00:22:58.824 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:58.824 "oacs": { 00:22:58.824 "security": 
0, 00:22:58.824 "format": 1, 00:22:58.824 "firmware": 0, 00:22:58.824 "ns_manage": 1 00:22:58.824 }, 00:22:58.824 "multi_ctrlr": false, 00:22:58.824 "ana_reporting": false 00:22:58.824 }, 00:22:58.824 "vs": { 00:22:58.824 "nvme_version": "1.4" 00:22:58.824 }, 00:22:58.824 "ns_data": { 00:22:58.824 "id": 1, 00:22:58.824 "can_share": false 00:22:58.824 } 00:22:58.824 } 00:22:58.824 ], 00:22:58.824 "mp_policy": "active_passive" 00:22:58.824 } 00:22:58.824 } 00:22:58.824 ]' 00:22:58.824 05:03:05 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:22:58.824 05:03:05 -- common/autotest_common.sh@1362 -- # bs=4096 00:22:58.824 05:03:05 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:22:58.824 05:03:05 -- common/autotest_common.sh@1363 -- # nb=1310720 00:22:58.824 05:03:05 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:22:58.824 05:03:05 -- common/autotest_common.sh@1367 -- # echo 5120 00:22:58.824 05:03:05 -- ftl/common.sh@63 -- # base_size=5120 00:22:58.824 05:03:05 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:58.824 05:03:05 -- ftl/common.sh@67 -- # clear_lvols 00:22:58.824 05:03:05 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:58.824 05:03:05 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:59.084 05:03:06 -- ftl/common.sh@28 -- # stores=78a4fee3-1fa1-4264-bfdd-452bff401b0f 00:22:59.084 05:03:06 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:59.084 05:03:06 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78a4fee3-1fa1-4264-bfdd-452bff401b0f 00:22:59.342 05:03:06 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:59.602 05:03:06 -- ftl/common.sh@68 -- # lvs=b9a95135-b649-4e6f-9917-6d2d35eaeba5 00:22:59.602 05:03:06 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b9a95135-b649-4e6f-9917-6d2d35eaeba5 00:22:59.602 05:03:06 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:22:59.602 05:03:06 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:59.602 05:03:06 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:22:59.602 05:03:06 -- ftl/common.sh@35 -- # local name=nvc0 00:22:59.602 05:03:06 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:59.602 05:03:06 -- ftl/common.sh@37 -- # local base_bdev=b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:22:59.602 05:03:06 -- ftl/common.sh@38 -- # local cache_size= 00:22:59.602 05:03:06 -- ftl/common.sh@41 -- # get_bdev_size b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:22:59.602 05:03:06 -- common/autotest_common.sh@1357 -- # local bdev_name=b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:22:59.602 05:03:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:22:59.602 05:03:06 -- common/autotest_common.sh@1359 -- # local bs 00:22:59.602 05:03:06 -- common/autotest_common.sh@1360 -- # local nb 00:22:59.602 05:03:06 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:22:59.861 05:03:06 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:22:59.861 { 00:22:59.861 "name": "b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4", 00:22:59.861 "aliases": [ 00:22:59.861 "lvs/nvme0n1p0" 00:22:59.861 ], 00:22:59.861 "product_name": "Logical Volume", 00:22:59.861 "block_size": 4096, 00:22:59.861 "num_blocks": 26476544, 00:22:59.861 
"uuid": "b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4", 00:22:59.861 "assigned_rate_limits": { 00:22:59.861 "rw_ios_per_sec": 0, 00:22:59.861 "rw_mbytes_per_sec": 0, 00:22:59.861 "r_mbytes_per_sec": 0, 00:22:59.861 "w_mbytes_per_sec": 0 00:22:59.861 }, 00:22:59.861 "claimed": false, 00:22:59.861 "zoned": false, 00:22:59.861 "supported_io_types": { 00:22:59.861 "read": true, 00:22:59.861 "write": true, 00:22:59.861 "unmap": true, 00:22:59.861 "write_zeroes": true, 00:22:59.861 "flush": false, 00:22:59.861 "reset": true, 00:22:59.861 "compare": false, 00:22:59.861 "compare_and_write": false, 00:22:59.861 "abort": false, 00:22:59.861 "nvme_admin": false, 00:22:59.861 "nvme_io": false 00:22:59.861 }, 00:22:59.861 "driver_specific": { 00:22:59.861 "lvol": { 00:22:59.861 "lvol_store_uuid": "b9a95135-b649-4e6f-9917-6d2d35eaeba5", 00:22:59.861 "base_bdev": "nvme0n1", 00:22:59.861 "thin_provision": true, 00:22:59.861 "snapshot": false, 00:22:59.861 "clone": false, 00:22:59.861 "esnap_clone": false 00:22:59.861 } 00:22:59.861 } 00:22:59.861 } 00:22:59.861 ]' 00:22:59.861 05:03:06 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:22:59.861 05:03:06 -- common/autotest_common.sh@1362 -- # bs=4096 00:22:59.861 05:03:06 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:22:59.861 05:03:06 -- common/autotest_common.sh@1363 -- # nb=26476544 00:22:59.861 05:03:06 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:22:59.861 05:03:06 -- common/autotest_common.sh@1367 -- # echo 103424 00:22:59.861 05:03:06 -- ftl/common.sh@41 -- # local base_size=5171 00:22:59.861 05:03:06 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:59.861 05:03:06 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:23:00.429 05:03:07 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:00.429 05:03:07 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:00.429 05:03:07 -- ftl/common.sh@48 -- # get_bdev_size b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:23:00.429 05:03:07 -- common/autotest_common.sh@1357 -- # local bdev_name=b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:23:00.429 05:03:07 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:00.429 05:03:07 -- common/autotest_common.sh@1359 -- # local bs 00:23:00.429 05:03:07 -- common/autotest_common.sh@1360 -- # local nb 00:23:00.429 05:03:07 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:23:00.429 05:03:07 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:00.429 { 00:23:00.429 "name": "b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4", 00:23:00.429 "aliases": [ 00:23:00.429 "lvs/nvme0n1p0" 00:23:00.429 ], 00:23:00.429 "product_name": "Logical Volume", 00:23:00.429 "block_size": 4096, 00:23:00.429 "num_blocks": 26476544, 00:23:00.429 "uuid": "b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4", 00:23:00.429 "assigned_rate_limits": { 00:23:00.429 "rw_ios_per_sec": 0, 00:23:00.429 "rw_mbytes_per_sec": 0, 00:23:00.429 "r_mbytes_per_sec": 0, 00:23:00.429 "w_mbytes_per_sec": 0 00:23:00.429 }, 00:23:00.429 "claimed": false, 00:23:00.429 "zoned": false, 00:23:00.429 "supported_io_types": { 00:23:00.429 "read": true, 00:23:00.429 "write": true, 00:23:00.429 "unmap": true, 00:23:00.429 "write_zeroes": true, 00:23:00.429 "flush": false, 00:23:00.429 "reset": true, 00:23:00.429 "compare": false, 00:23:00.429 "compare_and_write": false, 00:23:00.429 "abort": false, 00:23:00.429 "nvme_admin": false, 00:23:00.429 "nvme_io": false 00:23:00.429 }, 
00:23:00.429 "driver_specific": { 00:23:00.429 "lvol": { 00:23:00.429 "lvol_store_uuid": "b9a95135-b649-4e6f-9917-6d2d35eaeba5", 00:23:00.429 "base_bdev": "nvme0n1", 00:23:00.429 "thin_provision": true, 00:23:00.429 "snapshot": false, 00:23:00.429 "clone": false, 00:23:00.429 "esnap_clone": false 00:23:00.429 } 00:23:00.429 } 00:23:00.429 } 00:23:00.429 ]' 00:23:00.429 05:03:07 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:00.429 05:03:07 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:00.429 05:03:07 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:00.429 05:03:07 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:00.429 05:03:07 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:00.429 05:03:07 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:00.429 05:03:07 -- ftl/common.sh@48 -- # cache_size=5171 00:23:00.429 05:03:07 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:00.688 05:03:07 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:00.688 05:03:07 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:23:00.688 05:03:07 -- common/autotest_common.sh@1357 -- # local bdev_name=b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:23:00.688 05:03:07 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:00.688 05:03:07 -- common/autotest_common.sh@1359 -- # local bs 00:23:00.688 05:03:07 -- common/autotest_common.sh@1360 -- # local nb 00:23:00.688 05:03:07 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 00:23:00.948 05:03:07 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:00.948 { 00:23:00.948 "name": "b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4", 00:23:00.948 "aliases": [ 00:23:00.948 "lvs/nvme0n1p0" 00:23:00.948 ], 00:23:00.948 "product_name": "Logical Volume", 00:23:00.948 "block_size": 4096, 00:23:00.948 "num_blocks": 26476544, 00:23:00.948 "uuid": "b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4", 00:23:00.948 "assigned_rate_limits": { 00:23:00.948 "rw_ios_per_sec": 0, 00:23:00.948 "rw_mbytes_per_sec": 0, 00:23:00.948 "r_mbytes_per_sec": 0, 00:23:00.948 "w_mbytes_per_sec": 0 00:23:00.948 }, 00:23:00.948 "claimed": false, 00:23:00.948 "zoned": false, 00:23:00.948 "supported_io_types": { 00:23:00.948 "read": true, 00:23:00.948 "write": true, 00:23:00.948 "unmap": true, 00:23:00.948 "write_zeroes": true, 00:23:00.948 "flush": false, 00:23:00.948 "reset": true, 00:23:00.948 "compare": false, 00:23:00.948 "compare_and_write": false, 00:23:00.948 "abort": false, 00:23:00.948 "nvme_admin": false, 00:23:00.948 "nvme_io": false 00:23:00.948 }, 00:23:00.948 "driver_specific": { 00:23:00.948 "lvol": { 00:23:00.948 "lvol_store_uuid": "b9a95135-b649-4e6f-9917-6d2d35eaeba5", 00:23:00.948 "base_bdev": "nvme0n1", 00:23:00.948 "thin_provision": true, 00:23:00.948 "snapshot": false, 00:23:00.948 "clone": false, 00:23:00.948 "esnap_clone": false 00:23:00.948 } 00:23:00.948 } 00:23:00.948 } 00:23:00.948 ]' 00:23:00.948 05:03:07 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:00.948 05:03:08 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:00.948 05:03:08 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:00.948 05:03:08 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:00.948 05:03:08 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:00.948 05:03:08 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:00.948 
05:03:08 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:00.948 05:03:08 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 --l2p_dram_limit 10' 00:23:00.948 05:03:08 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:00.948 05:03:08 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:23:00.948 05:03:08 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:00.948 05:03:08 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b7d910b9-47d9-4aed-9cb7-bcc1c4fdafb4 --l2p_dram_limit 10 -c nvc0n1p0 00:23:01.209 [2024-05-12 05:03:08.285074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.285123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:01.209 [2024-05-12 05:03:08.285143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:01.209 [2024-05-12 05:03:08.285162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.285268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.285286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:01.209 [2024-05-12 05:03:08.285300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:01.209 [2024-05-12 05:03:08.285311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.285356] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:01.209 [2024-05-12 05:03:08.286314] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:01.209 [2024-05-12 05:03:08.286350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.286362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:01.209 [2024-05-12 05:03:08.286375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:23:01.209 [2024-05-12 05:03:08.286386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.286509] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9c535803-3c77-4cb2-a504-54d4ad3e3ace 00:23:01.209 [2024-05-12 05:03:08.287338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.287367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:01.209 [2024-05-12 05:03:08.287381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:01.209 [2024-05-12 05:03:08.287392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.291306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.291348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:01.209 [2024-05-12 05:03:08.291364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.871 ms 00:23:01.209 [2024-05-12 05:03:08.291376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.291487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.291508] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:01.209 [2024-05-12 05:03:08.291519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:01.209 [2024-05-12 05:03:08.291534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.291593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.291612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:01.209 [2024-05-12 05:03:08.291624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:01.209 [2024-05-12 05:03:08.291635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.291667] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:01.209 [2024-05-12 05:03:08.295408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.295443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:01.209 [2024-05-12 05:03:08.295459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:23:01.209 [2024-05-12 05:03:08.295469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.295509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.295538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:01.209 [2024-05-12 05:03:08.295550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:01.209 [2024-05-12 05:03:08.295561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.295602] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:01.209 [2024-05-12 05:03:08.295714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:01.209 [2024-05-12 05:03:08.295735] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:01.209 [2024-05-12 05:03:08.295749] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:01.209 [2024-05-12 05:03:08.295764] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:01.209 [2024-05-12 05:03:08.295776] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:01.209 [2024-05-12 05:03:08.295789] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:01.209 [2024-05-12 05:03:08.295799] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:01.209 [2024-05-12 05:03:08.295810] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:01.209 [2024-05-12 05:03:08.295823] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:01.209 [2024-05-12 05:03:08.295836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.295846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:01.209 [2024-05-12 05:03:08.295884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:23:01.209 [2024-05-12 05:03:08.295894] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.295954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.209 [2024-05-12 05:03:08.295966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:01.209 [2024-05-12 05:03:08.295979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:01.209 [2024-05-12 05:03:08.295988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.209 [2024-05-12 05:03:08.296057] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:01.209 [2024-05-12 05:03:08.296072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:01.209 [2024-05-12 05:03:08.296085] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.209 [2024-05-12 05:03:08.296095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:01.209 [2024-05-12 05:03:08.296115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:01.209 [2024-05-12 05:03:08.296134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:01.209 [2024-05-12 05:03:08.296145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.209 [2024-05-12 05:03:08.296164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:01.209 [2024-05-12 05:03:08.296218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:01.209 [2024-05-12 05:03:08.296234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.209 [2024-05-12 05:03:08.296269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:01.209 [2024-05-12 05:03:08.296303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:01.209 [2024-05-12 05:03:08.296314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:01.209 [2024-05-12 05:03:08.296343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:01.209 [2024-05-12 05:03:08.296355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:01.209 [2024-05-12 05:03:08.296378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:01.209 [2024-05-12 05:03:08.296389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:01.209 [2024-05-12 05:03:08.296401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:01.209 [2024-05-12 05:03:08.296412] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.209 [2024-05-12 05:03:08.296435] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:01.209 [2024-05-12 05:03:08.296447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296458] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.209 [2024-05-12 05:03:08.296470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:01.209 [2024-05-12 05:03:08.296480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.209 [2024-05-12 05:03:08.296532] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:01.209 [2024-05-12 05:03:08.296566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:01.209 [2024-05-12 05:03:08.296576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.210 [2024-05-12 05:03:08.296588] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:01.210 [2024-05-12 05:03:08.296598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:01.210 [2024-05-12 05:03:08.296624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.210 [2024-05-12 05:03:08.296650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:01.210 [2024-05-12 05:03:08.296679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:01.210 [2024-05-12 05:03:08.296690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.210 [2024-05-12 05:03:08.296701] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:01.210 [2024-05-12 05:03:08.296711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:01.210 [2024-05-12 05:03:08.296724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.210 [2024-05-12 05:03:08.296735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.210 [2024-05-12 05:03:08.296748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:01.210 [2024-05-12 05:03:08.296758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:01.210 [2024-05-12 05:03:08.296770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:01.210 [2024-05-12 05:03:08.296780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:01.210 [2024-05-12 05:03:08.296793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:01.210 [2024-05-12 05:03:08.296804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:01.210 [2024-05-12 05:03:08.296818] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:01.210 [2024-05-12 05:03:08.296831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.210 [2024-05-12 05:03:08.296845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:01.210 [2024-05-12 05:03:08.296856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:01.210 [2024-05-12 05:03:08.296868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:01.210 [2024-05-12 05:03:08.296879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 
00:23:01.210 [2024-05-12 05:03:08.296891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:01.210 [2024-05-12 05:03:08.296901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:01.210 [2024-05-12 05:03:08.296914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:01.210 [2024-05-12 05:03:08.296925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:01.210 [2024-05-12 05:03:08.296937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:01.210 [2024-05-12 05:03:08.296947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:01.210 [2024-05-12 05:03:08.296960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:01.210 [2024-05-12 05:03:08.296971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:01.210 [2024-05-12 05:03:08.296987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:01.210 [2024-05-12 05:03:08.296998] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:01.210 [2024-05-12 05:03:08.297015] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.210 [2024-05-12 05:03:08.297026] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.210 [2024-05-12 05:03:08.297039] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:01.210 [2024-05-12 05:03:08.297051] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:01.210 [2024-05-12 05:03:08.297063] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:01.210 [2024-05-12 05:03:08.297075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.210 [2024-05-12 05:03:08.297088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:01.210 [2024-05-12 05:03:08.297100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:23:01.210 [2024-05-12 05:03:08.297112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.210 [2024-05-12 05:03:08.311887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.210 [2024-05-12 05:03:08.311943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:01.210 [2024-05-12 05:03:08.311960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.714 ms 00:23:01.210 [2024-05-12 05:03:08.311972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.210 [2024-05-12 05:03:08.312058] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.210 [2024-05-12 05:03:08.312080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:01.210 [2024-05-12 05:03:08.312091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:01.210 [2024-05-12 05:03:08.312103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.469 [2024-05-12 05:03:08.343287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.469 [2024-05-12 05:03:08.343348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:01.469 [2024-05-12 05:03:08.343364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.133 ms 00:23:01.469 [2024-05-12 05:03:08.343377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.469 [2024-05-12 05:03:08.343417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.469 [2024-05-12 05:03:08.343436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:01.469 [2024-05-12 05:03:08.343447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:01.469 [2024-05-12 05:03:08.343458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.469 [2024-05-12 05:03:08.343796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.469 [2024-05-12 05:03:08.343815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:01.469 [2024-05-12 05:03:08.343828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:01.469 [2024-05-12 05:03:08.343839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.469 [2024-05-12 05:03:08.343949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.469 [2024-05-12 05:03:08.343969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:01.469 [2024-05-12 05:03:08.343980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:23:01.469 [2024-05-12 05:03:08.343991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.469 [2024-05-12 05:03:08.358563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.469 [2024-05-12 05:03:08.358601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:01.469 [2024-05-12 05:03:08.358617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.551 ms 00:23:01.469 [2024-05-12 05:03:08.358629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.469 [2024-05-12 05:03:08.369110] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:01.469 [2024-05-12 05:03:08.371552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.469 [2024-05-12 05:03:08.371588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:01.469 [2024-05-12 05:03:08.371641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.832 ms 00:23:01.470 [2024-05-12 05:03:08.371651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.470 [2024-05-12 05:03:08.481730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.470 [2024-05-12 05:03:08.481790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:01.470 [2024-05-12 05:03:08.481814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 110.043 ms 00:23:01.470 [2024-05-12 05:03:08.481824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.470 [2024-05-12 05:03:08.481878] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:23:01.470 [2024-05-12 05:03:08.481895] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:23:05.656 [2024-05-12 05:03:12.160414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.160477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:05.656 [2024-05-12 05:03:12.160529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3678.552 ms 00:23:05.656 [2024-05-12 05:03:12.160540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.160725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.160742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:05.656 [2024-05-12 05:03:12.160756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:23:05.656 [2024-05-12 05:03:12.160766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.185533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.185568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:05.656 [2024-05-12 05:03:12.185618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.711 ms 00:23:05.656 [2024-05-12 05:03:12.185629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.209578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.209613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:05.656 [2024-05-12 05:03:12.209649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms 00:23:05.656 [2024-05-12 05:03:12.209675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.209986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.210004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:05.656 [2024-05-12 05:03:12.210017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:23:05.656 [2024-05-12 05:03:12.210027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.275923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.275959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:05.656 [2024-05-12 05:03:12.275977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.856 ms 00:23:05.656 [2024-05-12 05:03:12.275988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.300895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.301061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:05.656 [2024-05-12 05:03:12.301178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.862 ms 00:23:05.656 
[2024-05-12 05:03:12.301330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.302991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.303020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:05.656 [2024-05-12 05:03:12.303054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:23:05.656 [2024-05-12 05:03:12.303064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.327936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.328102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:05.656 [2024-05-12 05:03:12.328236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.824 ms 00:23:05.656 [2024-05-12 05:03:12.328287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.328445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.328499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:05.656 [2024-05-12 05:03:12.328539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:05.656 [2024-05-12 05:03:12.328672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.328794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.656 [2024-05-12 05:03:12.328922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:05.656 [2024-05-12 05:03:12.329030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:05.656 [2024-05-12 05:03:12.329076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.656 [2024-05-12 05:03:12.330300] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4044.715 ms, result 0 00:23:05.656 { 00:23:05.656 "name": "ftl0", 00:23:05.656 "uuid": "9c535803-3c77-4cb2-a504-54d4ad3e3ace" 00:23:05.656 } 00:23:05.656 05:03:12 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:05.656 05:03:12 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:05.656 05:03:12 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:05.656 05:03:12 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:05.656 05:03:12 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:05.914 /dev/nbd0 00:23:05.914 05:03:12 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:05.914 05:03:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:05.914 05:03:12 -- common/autotest_common.sh@857 -- # local i 00:23:05.914 05:03:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:05.914 05:03:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:05.914 05:03:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:05.914 05:03:12 -- common/autotest_common.sh@861 -- # break 00:23:05.914 05:03:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:05.914 05:03:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:05.914 05:03:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:05.914 1+0 records in 00:23:05.914 
1+0 records out 00:23:05.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316752 s, 12.9 MB/s 00:23:05.914 05:03:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:05.914 05:03:12 -- common/autotest_common.sh@874 -- # size=4096 00:23:05.914 05:03:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:05.914 05:03:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:05.914 05:03:12 -- common/autotest_common.sh@877 -- # return 0 00:23:05.914 05:03:12 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:05.914 [2024-05-12 05:03:12.949472] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:05.914 [2024-05-12 05:03:12.949618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76485 ] 00:23:06.173 [2024-05-12 05:03:13.110583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.431 [2024-05-12 05:03:13.333576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.493  Copying: 210/1024 [MB] (210 MBps) Copying: 420/1024 [MB] (210 MBps) Copying: 634/1024 [MB] (213 MBps) Copying: 839/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 206 MBps) 00:23:12.493 00:23:12.493 05:03:19 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:14.398 05:03:21 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:14.398 [2024-05-12 05:03:21.458175] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
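The write phase now starting is the half of the test that must survive the shutdown: 1 GiB of random data is staged in a file, checksummed, and pushed through the nbd front-end into ftl0. Restated as a sketch, with commands and flags taken from the invocations logged above:

  SPDK=/home/vagrant/spdk_repo/spdk
  DD=$SPDK/build/bin/spdk_dd
  # stage 1 GiB (262144 x 4 KiB blocks) of random data
  "$DD" -m 0x2 -r /var/tmp/spdk_dd.sock \
      --if=/dev/urandom --of=$SPDK/test/ftl/testfile --bs=4096 --count=262144
  md5sum $SPDK/test/ftl/testfile   # reference checksum taken at dirty_shutdown.sh@76
  # write the staged file through /dev/nbd0 into the FTL bdev
  "$DD" -m 0x2 -r /var/tmp/spdk_dd.sock \
      --if=$SPDK/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 \
      --oflag=direct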
00:23:14.398 [2024-05-12 05:03:21.458314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76580 ] 00:23:14.657 [2024-05-12 05:03:21.613609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.916 [2024-05-12 05:03:21.803206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.227  Copying: 15/1024 [MB] (15 MBps) [66 intermediate progress updates, steady at 14-15 MBps, elided] Copying: 1024/1024 [MB] (average 15 MBps) 00:24:23.227 00:24:23.227 05:04:30 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:23.486 05:04:30 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:23.747 05:04:30 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:23.747 [2024-05-12 05:04:30.694820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.747 [2024-05-12 05:04:30.694872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:23.747 [2024-05-12 05:04:30.694890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:23.747 [2024-05-12 05:04:30.694902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.747
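For contrast with what follows, this is the clean teardown: the unload now under way persists the L2P, NV cache metadata, valid map, P2L checkpoints, band and trim metadata, and finally the superblock before flipping the device to its clean state, all traced below. The trio of commands, as logged:

  sync /dev/nbd0                                   # flush the nbd front-end
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0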
[2024-05-12 05:04:30.694943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:23.747 [2024-05-12 05:04:30.697829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.747 [2024-05-12 05:04:30.697858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:23.747 [2024-05-12 05:04:30.697874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:24:23.747 [2024-05-12 05:04:30.697884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.747 [2024-05-12 05:04:30.699683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.747 [2024-05-12 05:04:30.699718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:23.747 [2024-05-12 05:04:30.699750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.765 ms 00:24:23.747 [2024-05-12 05:04:30.699761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.747 [2024-05-12 05:04:30.715283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.747 [2024-05-12 05:04:30.715321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:23.747 [2024-05-12 05:04:30.715355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.497 ms 00:24:23.747 [2024-05-12 05:04:30.715366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.747 [2024-05-12 05:04:30.720661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.747 [2024-05-12 05:04:30.720689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:23.747 [2024-05-12 05:04:30.720705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.252 ms 00:24:23.747 [2024-05-12 05:04:30.720716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.747 [2024-05-12 05:04:30.744920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.748 [2024-05-12 05:04:30.744956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:23.748 [2024-05-12 05:04:30.744973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.122 ms 00:24:23.748 [2024-05-12 05:04:30.744983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.748 [2024-05-12 05:04:30.760279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.748 [2024-05-12 05:04:30.760314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:23.748 [2024-05-12 05:04:30.760347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.252 ms 00:24:23.748 [2024-05-12 05:04:30.760360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.748 [2024-05-12 05:04:30.760584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.748 [2024-05-12 05:04:30.760612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:23.748 [2024-05-12 05:04:30.760628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:24:23.748 [2024-05-12 05:04:30.760639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.748 [2024-05-12 05:04:30.785308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.748 [2024-05-12 05:04:30.785352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:23.748 [2024-05-12 
05:04:30.785371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.640 ms 00:24:23.748 [2024-05-12 05:04:30.785380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.748 [2024-05-12 05:04:30.809332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.748 [2024-05-12 05:04:30.809366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:23.748 [2024-05-12 05:04:30.809382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.908 ms 00:24:23.748 [2024-05-12 05:04:30.809392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.748 [2024-05-12 05:04:30.833047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.748 [2024-05-12 05:04:30.833080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:23.748 [2024-05-12 05:04:30.833097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.612 ms 00:24:23.748 [2024-05-12 05:04:30.833106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.748 [2024-05-12 05:04:30.857017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.748 [2024-05-12 05:04:30.857051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:23.748 [2024-05-12 05:04:30.857068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.828 ms 00:24:23.748 [2024-05-12 05:04:30.857078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.748 [2024-05-12 05:04:30.857121] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:23.748 [2024-05-12 05:04:30.857140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 
05:04:30.857320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:24:23.748 [2024-05-12 05:04:30.857707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.857987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:23.748 [2024-05-12 05:04:30.858088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:23.749 [2024-05-12 05:04:30.858503] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:23.749 [2024-05-12 05:04:30.858516] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c535803-3c77-4cb2-a504-54d4ad3e3ace 00:24:23.749 [2024-05-12 05:04:30.858526] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:23.749 [2024-05-12 05:04:30.858539] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:23.749 [2024-05-12 05:04:30.858549] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:23.749 [2024-05-12 05:04:30.858564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:23.749 [2024-05-12 05:04:30.858574] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:23.749 [2024-05-12 05:04:30.858586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:23.749 [2024-05-12 05:04:30.858596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:23.749 [2024-05-12 05:04:30.858607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:23.749 [2024-05-12 05:04:30.858617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:23.749 [2024-05-12 05:04:30.858631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.749 [2024-05-12 05:04:30.858642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:23.749 [2024-05-12 05:04:30.858655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:24:23.749 [2024-05-12 05:04:30.858665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:30.873428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:24.009 [2024-05-12 05:04:30.873466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:24.009 [2024-05-12 05:04:30.873516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.719 ms 00:24:24.009 [2024-05-12 05:04:30.873528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:30.873785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.009 [2024-05-12 05:04:30.873800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:24.009 [2024-05-12 05:04:30.873813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:24:24.009 [2024-05-12 05:04:30.873824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:30.923133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:30.923174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.009 [2024-05-12 05:04:30.923190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:30.923200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:30.923313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:30.923344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.009 [2024-05-12 05:04:30.923357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:30.923368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:30.923457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:30.923475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.009 [2024-05-12 05:04:30.923491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:30.923502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:30.923526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:30.923539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.009 [2024-05-12 05:04:30.923567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:30.923610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.003663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.003717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.009 [2024-05-12 05:04:31.003736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.003746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.034675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.034709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.009 [2024-05-12 05:04:31.034726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.034736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 
[2024-05-12 05:04:31.034815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.034830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.009 [2024-05-12 05:04:31.034842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.034854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.034906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.034919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.009 [2024-05-12 05:04:31.034931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.034940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.035038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.035054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.009 [2024-05-12 05:04:31.035066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.035076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.035124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.035139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:24.009 [2024-05-12 05:04:31.035151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.035161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.035204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.035254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.009 [2024-05-12 05:04:31.035287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.035298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.035370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.009 [2024-05-12 05:04:31.035386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.009 [2024-05-12 05:04:31.035399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.009 [2024-05-12 05:04:31.035409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.009 [2024-05-12 05:04:31.035552] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.693 ms, result 0 00:24:24.009 true 00:24:24.009 05:04:31 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76335 00:24:24.009 05:04:31 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76335 00:24:24.009 05:04:31 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:24.268 [2024-05-12 05:04:31.154063] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
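
The xtrace lines above are the core of the dirty-shutdown scenario. Note that the 'FTL shutdown' teardown preceding them runs only Dump statistics, the two Deinitialize steps, and a chain of Rollback steps; unlike the clean shutdown later in this log, there are no Persist steps and no 'Set FTL clean state'. The spdk_tgt process holding ftl0 (pid 76335 here) is then terminated with SIGKILL, and spdk_dd generates a 1 GiB random reference file. A minimal sketch of that sequence, assuming helper variables $spdk_tgt_pid and $testdir that do not appear in this trace; the literal commands and flags are exactly the ones logged:

    # Terminate the SPDK target abruptly and drop its trace file; the FTL
    # device is left without a clean-state marker on disk.
    kill -9 "$spdk_tgt_pid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"

    # Generate 1 GiB (262144 x 4 KiB blocks) of random reference data.
    "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom \
        --of="$testdir/testfile2" --bs=4096 --count=262144

When the device is next opened (the spdk_dd --ob=ftl0 run a little further down), the startup trace shows the matching recovery path: 'Performing recovery on blobstore' followed by the Restore NV cache / valid map / band info / trim / P2L / L2P steps.
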
00:24:24.268 [2024-05-12 05:04:31.154282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77283 ] 00:24:24.268 [2024-05-12 05:04:31.320765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.528 [2024-05-12 05:04:31.463292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.217  Copying: 219/1024 [MB] (219 MBps) Copying: 442/1024 [MB] (222 MBps) Copying: 664/1024 [MB] (222 MBps) Copying: 879/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 218 MBps) 00:24:30.217 00:24:30.217 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76335 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:30.217 05:04:37 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:30.217 [2024-05-12 05:04:37.321375] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:30.217 [2024-05-12 05:04:37.321538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77351 ] 00:24:30.476 [2024-05-12 05:04:37.490596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.736 [2024-05-12 05:04:37.635141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.994 [2024-05-12 05:04:37.887894] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.994 [2024-05-12 05:04:37.888200] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.995 [2024-05-12 05:04:37.949385] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:30.995 [2024-05-12 05:04:37.949666] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:30.995 [2024-05-12 05:04:37.949930] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:31.255 [2024-05-12 05:04:38.210693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.210739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:31.255 [2024-05-12 05:04:38.210772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:31.255 [2024-05-12 05:04:38.210782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.210836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.210853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.255 [2024-05-12 05:04:38.210864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:31.255 [2024-05-12 05:04:38.210873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.210903] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:31.255 [2024-05-12 05:04:38.211771] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:31.255 [2024-05-12 05:04:38.211802] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.211814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.255 [2024-05-12 05:04:38.211829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:24:31.255 [2024-05-12 05:04:38.211840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.212989] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:31.255 [2024-05-12 05:04:38.226715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.226899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:31.255 [2024-05-12 05:04:38.227037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.728 ms 00:24:31.255 [2024-05-12 05:04:38.227059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.227123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.227140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:31.255 [2024-05-12 05:04:38.227152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:31.255 [2024-05-12 05:04:38.227166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.231399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.231435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.255 [2024-05-12 05:04:38.231465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.112 ms 00:24:31.255 [2024-05-12 05:04:38.231475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.231562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.231580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.255 [2024-05-12 05:04:38.231593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:31.255 [2024-05-12 05:04:38.231602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.231662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.231676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:31.255 [2024-05-12 05:04:38.231686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:31.255 [2024-05-12 05:04:38.231694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.231727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:31.255 [2024-05-12 05:04:38.235396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.235426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:31.255 [2024-05-12 05:04:38.235455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:24:31.255 [2024-05-12 05:04:38.235464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.235499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.255 [2024-05-12 05:04:38.235512] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:31.255 [2024-05-12 05:04:38.235535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:31.255 [2024-05-12 05:04:38.235544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.255 [2024-05-12 05:04:38.235570] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:31.255 [2024-05-12 05:04:38.235595] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:31.255 [2024-05-12 05:04:38.235627] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:31.255 [2024-05-12 05:04:38.235644] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:31.255 [2024-05-12 05:04:38.235709] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:31.255 [2024-05-12 05:04:38.235726] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:31.255 [2024-05-12 05:04:38.235738] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:31.255 [2024-05-12 05:04:38.235751] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:31.255 [2024-05-12 05:04:38.235761] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:31.255 [2024-05-12 05:04:38.235771] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:31.255 [2024-05-12 05:04:38.235780] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:31.255 [2024-05-12 05:04:38.235788] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:31.256 [2024-05-12 05:04:38.235797] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:31.256 [2024-05-12 05:04:38.235807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.235815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:31.256 [2024-05-12 05:04:38.235825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:24:31.256 [2024-05-12 05:04:38.235837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.256 [2024-05-12 05:04:38.235909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.235925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:31.256 [2024-05-12 05:04:38.235934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:31.256 [2024-05-12 05:04:38.235943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.256 [2024-05-12 05:04:38.236012] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:31.256 [2024-05-12 05:04:38.236027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:31.256 [2024-05-12 05:04:38.236037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:31.256 
[2024-05-12 05:04:38.236069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:31.256 [2024-05-12 05:04:38.236096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:31.256 [2024-05-12 05:04:38.236112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:31.256 [2024-05-12 05:04:38.236121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:31.256 [2024-05-12 05:04:38.236129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:31.256 [2024-05-12 05:04:38.236137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:31.256 [2024-05-12 05:04:38.236145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:31.256 [2024-05-12 05:04:38.236154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:31.256 [2024-05-12 05:04:38.236181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:31.256 [2024-05-12 05:04:38.236189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:31.256 [2024-05-12 05:04:38.236264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:31.256 [2024-05-12 05:04:38.236295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:31.256 [2024-05-12 05:04:38.236314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:31.256 [2024-05-12 05:04:38.236342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:31.256 [2024-05-12 05:04:38.236369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:31.256 [2024-05-12 05:04:38.236401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:31.256 [2024-05-12 05:04:38.236427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.25 MiB 00:24:31.256 [2024-05-12 05:04:38.236446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:31.256 [2024-05-12 05:04:38.236455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:31.256 [2024-05-12 05:04:38.236464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:31.256 [2024-05-12 05:04:38.236473] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:31.256 [2024-05-12 05:04:38.236484] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:31.256 [2024-05-12 05:04:38.236494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.256 [2024-05-12 05:04:38.236514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:31.256 [2024-05-12 05:04:38.236523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:31.256 [2024-05-12 05:04:38.236532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:31.256 [2024-05-12 05:04:38.236557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:31.256 [2024-05-12 05:04:38.236566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:31.256 [2024-05-12 05:04:38.236591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:31.256 [2024-05-12 05:04:38.236601] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:31.256 [2024-05-12 05:04:38.236614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:31.256 [2024-05-12 05:04:38.236625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:31.256 [2024-05-12 05:04:38.236651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:31.256 [2024-05-12 05:04:38.236660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:31.256 [2024-05-12 05:04:38.236670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:31.256 [2024-05-12 05:04:38.236679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:31.256 [2024-05-12 05:04:38.236689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:31.256 [2024-05-12 05:04:38.236715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:31.256 [2024-05-12 05:04:38.236725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:31.256 [2024-05-12 05:04:38.236735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:31.256 [2024-05-12 05:04:38.236745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 
00:24:31.256 [2024-05-12 05:04:38.236755] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:31.256 [2024-05-12 05:04:38.236765] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:31.256 [2024-05-12 05:04:38.236776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:31.256 [2024-05-12 05:04:38.236785] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:31.256 [2024-05-12 05:04:38.236809] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:31.256 [2024-05-12 05:04:38.236821] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:31.256 [2024-05-12 05:04:38.236831] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:31.256 [2024-05-12 05:04:38.236842] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:31.256 [2024-05-12 05:04:38.236852] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:31.256 [2024-05-12 05:04:38.236863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.236875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:31.256 [2024-05-12 05:04:38.236890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:24:31.256 [2024-05-12 05:04:38.236900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.256 [2024-05-12 05:04:38.252419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.252459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.256 [2024-05-12 05:04:38.252497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.464 ms 00:24:31.256 [2024-05-12 05:04:38.252507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.256 [2024-05-12 05:04:38.252608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.252636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:31.256 [2024-05-12 05:04:38.252647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:31.256 [2024-05-12 05:04:38.252655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.256 [2024-05-12 05:04:38.297476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.297522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.256 [2024-05-12 05:04:38.297553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.759 ms 00:24:31.256 [2024-05-12 05:04:38.297564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.256 [2024-05-12 05:04:38.297619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.297633] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.256 [2024-05-12 05:04:38.297644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:31.256 [2024-05-12 05:04:38.297653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.256 [2024-05-12 05:04:38.297985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.256 [2024-05-12 05:04:38.298006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.256 [2024-05-12 05:04:38.298017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:31.256 [2024-05-12 05:04:38.298026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.257 [2024-05-12 05:04:38.298149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.257 [2024-05-12 05:04:38.298166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.257 [2024-05-12 05:04:38.298177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:24:31.257 [2024-05-12 05:04:38.298185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.257 [2024-05-12 05:04:38.312457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.257 [2024-05-12 05:04:38.312494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.257 [2024-05-12 05:04:38.312525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.248 ms 00:24:31.257 [2024-05-12 05:04:38.312536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.257 [2024-05-12 05:04:38.326188] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:31.257 [2024-05-12 05:04:38.326248] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:31.257 [2024-05-12 05:04:38.326285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.257 [2024-05-12 05:04:38.326295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:31.257 [2024-05-12 05:04:38.326306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.582 ms 00:24:31.257 [2024-05-12 05:04:38.326316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.257 [2024-05-12 05:04:38.350373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.257 [2024-05-12 05:04:38.350409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:31.257 [2024-05-12 05:04:38.350440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.989 ms 00:24:31.257 [2024-05-12 05:04:38.350450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.257 [2024-05-12 05:04:38.363272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.257 [2024-05-12 05:04:38.363306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:31.257 [2024-05-12 05:04:38.363336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.776 ms 00:24:31.257 [2024-05-12 05:04:38.363345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.257 [2024-05-12 05:04:38.375544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.257 [2024-05-12 05:04:38.375592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:24:31.257 [2024-05-12 05:04:38.375649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.163 ms 00:24:31.257 [2024-05-12 05:04:38.375657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.257 [2024-05-12 05:04:38.376007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.257 [2024-05-12 05:04:38.376041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:31.257 [2024-05-12 05:04:38.376052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:31.257 [2024-05-12 05:04:38.376062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.435719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.435776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:31.516 [2024-05-12 05:04:38.435793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.637 ms 00:24:31.516 [2024-05-12 05:04:38.435803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.445931] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:31.516 [2024-05-12 05:04:38.447840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.447870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:31.516 [2024-05-12 05:04:38.447884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.989 ms 00:24:31.516 [2024-05-12 05:04:38.447893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.447965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.447982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:31.516 [2024-05-12 05:04:38.447994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:31.516 [2024-05-12 05:04:38.448002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.448068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.448083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:31.516 [2024-05-12 05:04:38.448097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:31.516 [2024-05-12 05:04:38.448106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.449809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.449839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:31.516 [2024-05-12 05:04:38.449851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:24:31.516 [2024-05-12 05:04:38.449860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.449891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.449910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:31.516 [2024-05-12 05:04:38.449920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:31.516 [2024-05-12 05:04:38.449929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 
05:04:38.449967] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:31.516 [2024-05-12 05:04:38.449981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.449990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:31.516 [2024-05-12 05:04:38.449999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:31.516 [2024-05-12 05:04:38.450008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.474174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.474210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:31.516 [2024-05-12 05:04:38.474252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.147 ms 00:24:31.516 [2024-05-12 05:04:38.474270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.474355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.516 [2024-05-12 05:04:38.474371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:31.516 [2024-05-12 05:04:38.474382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:31.516 [2024-05-12 05:04:38.474391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.516 [2024-05-12 05:04:38.475708] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.469 ms, result 0 00:25:14.835  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (24 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 142/1024 [MB] (24 MBps) Copying: 167/1024 [MB] (24 MBps) Copying: 191/1024 [MB] (24 MBps) Copying: 216/1024 [MB] (24 MBps) Copying: 241/1024 [MB] (24 MBps) Copying: 265/1024 [MB] (24 MBps) Copying: 290/1024 [MB] (24 MBps) Copying: 314/1024 [MB] (24 MBps) Copying: 338/1024 [MB] (24 MBps) Copying: 363/1024 [MB] (24 MBps) Copying: 389/1024 [MB] (25 MBps) Copying: 414/1024 [MB] (25 MBps) Copying: 438/1024 [MB] (24 MBps) Copying: 463/1024 [MB] (24 MBps) Copying: 488/1024 [MB] (24 MBps) Copying: 512/1024 [MB] (24 MBps) Copying: 535/1024 [MB] (23 MBps) Copying: 559/1024 [MB] (23 MBps) Copying: 584/1024 [MB] (24 MBps) Copying: 608/1024 [MB] (24 MBps) Copying: 633/1024 [MB] (24 MBps) Copying: 657/1024 [MB] (24 MBps) Copying: 681/1024 [MB] (24 MBps) Copying: 706/1024 [MB] (24 MBps) Copying: 731/1024 [MB] (24 MBps) Copying: 755/1024 [MB] (24 MBps) Copying: 779/1024 [MB] (24 MBps) Copying: 804/1024 [MB] (24 MBps) Copying: 828/1024 [MB] (24 MBps) Copying: 852/1024 [MB] (24 MBps) Copying: 877/1024 [MB] (24 MBps) Copying: 901/1024 [MB] (24 MBps) Copying: 925/1024 [MB] (23 MBps) Copying: 949/1024 [MB] (24 MBps) Copying: 973/1024 [MB] (24 MBps) Copying: 997/1024 [MB] (24 MBps) Copying: 1022/1024 [MB] (24 MBps) Copying: 1048400/1048576 [kB] (1808 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-05-12 05:05:21.711534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.835 [2024-05-12 05:05:21.711658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:14.835 [2024-05-12 05:05:21.711695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:14.835 [2024-05-12 05:05:21.711707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:14.835 [2024-05-12 05:05:21.714043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:14.835 [2024-05-12 05:05:21.720353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.835 [2024-05-12 05:05:21.720395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:14.835 [2024-05-12 05:05:21.720411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.204 ms 00:25:14.835 [2024-05-12 05:05:21.720423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.835 [2024-05-12 05:05:21.731592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.835 [2024-05-12 05:05:21.731630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:14.835 [2024-05-12 05:05:21.731646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.410 ms 00:25:14.835 [2024-05-12 05:05:21.731671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.835 [2024-05-12 05:05:21.752665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.835 [2024-05-12 05:05:21.752715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:14.835 [2024-05-12 05:05:21.752748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.975 ms 00:25:14.835 [2024-05-12 05:05:21.752758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.835 [2024-05-12 05:05:21.758036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.835 [2024-05-12 05:05:21.758073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:14.835 [2024-05-12 05:05:21.758087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.243 ms 00:25:14.836 [2024-05-12 05:05:21.758095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.836 [2024-05-12 05:05:21.782356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.836 [2024-05-12 05:05:21.782394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:14.836 [2024-05-12 05:05:21.782408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.182 ms 00:25:14.836 [2024-05-12 05:05:21.782417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.836 [2024-05-12 05:05:21.796959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.836 [2024-05-12 05:05:21.796993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:14.836 [2024-05-12 05:05:21.797007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.505 ms 00:25:14.836 [2024-05-12 05:05:21.797016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.836 [2024-05-12 05:05:21.910278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.836 [2024-05-12 05:05:21.910317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:14.836 [2024-05-12 05:05:21.910350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.224 ms 00:25:14.836 [2024-05-12 05:05:21.910382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.836 [2024-05-12 05:05:21.935234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.836 [2024-05-12 05:05:21.935267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 
00:25:14.836 [2024-05-12 05:05:21.935281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.833 ms 00:25:14.836 [2024-05-12 05:05:21.935290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.836 [2024-05-12 05:05:21.959619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.836 [2024-05-12 05:05:21.959661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:14.836 [2024-05-12 05:05:21.959675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.294 ms 00:25:14.836 [2024-05-12 05:05:21.959697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.096 [2024-05-12 05:05:21.984463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.096 [2024-05-12 05:05:21.984498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:15.096 [2024-05-12 05:05:21.984528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.730 ms 00:25:15.096 [2024-05-12 05:05:21.984537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.096 [2024-05-12 05:05:22.008512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.096 [2024-05-12 05:05:22.008565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:15.096 [2024-05-12 05:05:22.008595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.900 ms 00:25:15.096 [2024-05-12 05:05:22.008604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.096 [2024-05-12 05:05:22.008654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:15.096 [2024-05-12 05:05:22.008672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:25:15.096 [2024-05-12 05:05:22.008684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:25:15.096 [2024-05-12 05:05:22.008790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:15.096 [2024-05-12 05:05:22.008910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.008992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009516] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:15.097 [2024-05-12 05:05:22.009676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:15.097 [2024-05-12 05:05:22.009701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c535803-3c77-4cb2-a504-54d4ad3e3ace 00:25:15.097 [2024-05-12 05:05:22.009711] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:25:15.097 [2024-05-12 05:05:22.009726] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131264 00:25:15.097 [2024-05-12 05:05:22.009735] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:25:15.097 [2024-05-12 05:05:22.009745] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:25:15.097 [2024-05-12 05:05:22.009755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:15.097 [2024-05-12 05:05:22.009781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:15.097 [2024-05-12 05:05:22.009790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:15.097 [2024-05-12 05:05:22.009799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:15.097 [2024-05-12 05:05:22.009817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:15.097 [2024-05-12 05:05:22.009828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.097 [2024-05-12 05:05:22.009837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:15.097 [2024-05-12 05:05:22.009848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:25:15.097 [2024-05-12 05:05:22.009862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.097 [2024-05-12 05:05:22.023000] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.097 [2024-05-12 05:05:22.023032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:15.097 [2024-05-12 05:05:22.023046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.101 ms 00:25:15.097 [2024-05-12 05:05:22.023055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.097 [2024-05-12 05:05:22.023297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.097 [2024-05-12 05:05:22.023314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:15.098 [2024-05-12 05:05:22.023325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:25:15.098 [2024-05-12 05:05:22.023335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.058323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.058359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:15.098 [2024-05-12 05:05:22.058389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.058398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.058445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.058458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:15.098 [2024-05-12 05:05:22.058467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.058476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.058560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.058577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:15.098 [2024-05-12 05:05:22.058587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.058596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.058631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.058642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:15.098 [2024-05-12 05:05:22.058650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.058658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.132293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.132346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:15.098 [2024-05-12 05:05:22.132378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.132388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.162230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.162262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:15.098 [2024-05-12 05:05:22.162292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.162302] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.162383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.162399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:15.098 [2024-05-12 05:05:22.162409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.162418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.162462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.162476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:15.098 [2024-05-12 05:05:22.162486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.162495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.162641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.162663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:15.098 [2024-05-12 05:05:22.162674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.162683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.162726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.162743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:15.098 [2024-05-12 05:05:22.162754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.162763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.162801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.162826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:15.098 [2024-05-12 05:05:22.162837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.162847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.162893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.098 [2024-05-12 05:05:22.162908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:15.098 [2024-05-12 05:05:22.162918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.098 [2024-05-12 05:05:22.162928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.098 [2024-05-12 05:05:22.163137] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 452.503 ms, result 0 00:25:16.999 00:25:16.999 00:25:16.999 05:05:23 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:18.377 05:05:25 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:18.636 [2024-05-12 05:05:25.593435] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
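A quick consistency check on the statistics dump above (not part of the captured test output), assuming WAF is computed as total writes divided by user writes, which the dumped counters bear out:

awk 'BEGIN { printf "WAF: %.4f\n", 131264 / 130304 }'   # prints WAF: 1.0074, matching the dump

The roughly 960-block gap between the two counters is the FTL's own internal write traffic for this run, presumably metadata persistence.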
00:25:18.636 [2024-05-12 05:05:25.593597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77826 ] 00:25:18.636 [2024-05-12 05:05:25.760622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.895 [2024-05-12 05:05:25.952323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.154 [2024-05-12 05:05:26.200233] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:19.154 [2024-05-12 05:05:26.200318] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:19.415 [2024-05-12 05:05:26.348652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.348695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:19.415 [2024-05-12 05:05:26.348712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:19.415 [2024-05-12 05:05:26.348722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.348781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.348796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:19.415 [2024-05-12 05:05:26.348807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:19.415 [2024-05-12 05:05:26.348816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.348840] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:19.415 [2024-05-12 05:05:26.349601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:19.415 [2024-05-12 05:05:26.349638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.349649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:19.415 [2024-05-12 05:05:26.349659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:25:19.415 [2024-05-12 05:05:26.349669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.350775] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:19.415 [2024-05-12 05:05:26.363490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.363528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:19.415 [2024-05-12 05:05:26.363564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.716 ms 00:25:19.415 [2024-05-12 05:05:26.363575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.363647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.363664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:19.415 [2024-05-12 05:05:26.363675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:19.415 [2024-05-12 05:05:26.363684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.367750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 
05:05:26.367783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:19.415 [2024-05-12 05:05:26.367795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.990 ms 00:25:19.415 [2024-05-12 05:05:26.367804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.367890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.367907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:19.415 [2024-05-12 05:05:26.367917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:19.415 [2024-05-12 05:05:26.367927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.367984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.368004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:19.415 [2024-05-12 05:05:26.368014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:19.415 [2024-05-12 05:05:26.368024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.368055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:19.415 [2024-05-12 05:05:26.371699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.371729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:19.415 [2024-05-12 05:05:26.371758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:25:19.415 [2024-05-12 05:05:26.371767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.371804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.371817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:19.415 [2024-05-12 05:05:26.371828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:19.415 [2024-05-12 05:05:26.371838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.371862] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:19.415 [2024-05-12 05:05:26.371887] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:19.415 [2024-05-12 05:05:26.371923] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:19.415 [2024-05-12 05:05:26.371939] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:19.415 [2024-05-12 05:05:26.372005] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:19.415 [2024-05-12 05:05:26.372018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:19.415 [2024-05-12 05:05:26.372031] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:19.415 [2024-05-12 05:05:26.372043] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:19.415 [2024-05-12 05:05:26.372054] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:19.415 [2024-05-12 05:05:26.372074] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:19.415 [2024-05-12 05:05:26.372083] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:19.415 [2024-05-12 05:05:26.372091] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:19.415 [2024-05-12 05:05:26.372100] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:19.415 [2024-05-12 05:05:26.372110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.372120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:19.415 [2024-05-12 05:05:26.372130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:25:19.415 [2024-05-12 05:05:26.372139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.415 [2024-05-12 05:05:26.372211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.415 [2024-05-12 05:05:26.372532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:19.415 [2024-05-12 05:05:26.372599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:19.415 [2024-05-12 05:05:26.372745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.372864] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:19.416 [2024-05-12 05:05:26.373065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:19.416 [2024-05-12 05:05:26.373087] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:19.416 [2024-05-12 05:05:26.373118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:19.416 [2024-05-12 05:05:26.373146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:19.416 [2024-05-12 05:05:26.373164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:19.416 [2024-05-12 05:05:26.373173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:19.416 [2024-05-12 05:05:26.373182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:19.416 [2024-05-12 05:05:26.373191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:19.416 [2024-05-12 05:05:26.373200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:19.416 [2024-05-12 05:05:26.373209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:19.416 [2024-05-12 05:05:26.373262] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:19.416 [2024-05-12 05:05:26.373271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:19.416 [2024-05-12 05:05:26.373291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:19.416 [2024-05-12 05:05:26.373317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:19.416 [2024-05-12 05:05:26.373369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:19.416 [2024-05-12 05:05:26.373398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:19.416 [2024-05-12 05:05:26.373425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:19.416 [2024-05-12 05:05:26.373453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:19.416 [2024-05-12 05:05:26.373481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:19.416 [2024-05-12 05:05:26.373500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:19.416 [2024-05-12 05:05:26.373510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:19.416 [2024-05-12 05:05:26.373520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:19.416 [2024-05-12 05:05:26.373529] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:19.416 [2024-05-12 05:05:26.373539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:19.416 [2024-05-12 05:05:26.373549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:19.416 [2024-05-12 05:05:26.373589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:19.416 [2024-05-12 05:05:26.373614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:19.416 [2024-05-12 05:05:26.373623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:19.416 [2024-05-12 05:05:26.373633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:19.416 [2024-05-12 05:05:26.373642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:19.416 [2024-05-12 05:05:26.373666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:19.416 [2024-05-12 05:05:26.373677] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:19.416 [2024-05-12 05:05:26.373689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.416 [2024-05-12 05:05:26.373700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:19.416 [2024-05-12 05:05:26.373725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:19.416 [2024-05-12 05:05:26.373735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:19.416 [2024-05-12 05:05:26.373744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:19.416 [2024-05-12 05:05:26.373754] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:19.416 [2024-05-12 05:05:26.373763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:19.416 [2024-05-12 05:05:26.373773] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:19.416 [2024-05-12 05:05:26.373782] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:19.416 [2024-05-12 05:05:26.373791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:19.416 [2024-05-12 05:05:26.373801] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:19.416 [2024-05-12 05:05:26.373810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:19.416 [2024-05-12 05:05:26.373820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:19.416 [2024-05-12 05:05:26.373830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:19.416 [2024-05-12 05:05:26.373839] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:19.416 [2024-05-12 05:05:26.373850] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.416 [2024-05-12 05:05:26.373861] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:19.416 [2024-05-12 05:05:26.373871] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:19.416 [2024-05-12 05:05:26.373881] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:19.416 [2024-05-12 05:05:26.373891] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:19.416 [2024-05-12 05:05:26.373902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.373912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:19.416 [2024-05-12 05:05:26.373923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:25:19.416 [2024-05-12 05:05:26.373932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.388567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.388602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:19.416 [2024-05-12 05:05:26.388649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.569 ms 00:25:19.416 [2024-05-12 05:05:26.388659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.388735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.388753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:19.416 [2024-05-12 05:05:26.388763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:19.416 [2024-05-12 05:05:26.388773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.425514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.425555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:19.416 [2024-05-12 05:05:26.425571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.686 ms 00:25:19.416 [2024-05-12 05:05:26.425585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.425632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.425646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:19.416 [2024-05-12 05:05:26.425657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:19.416 [2024-05-12 05:05:26.425666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.425980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.425996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:19.416 [2024-05-12 05:05:26.426006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:25:19.416 [2024-05-12 05:05:26.426015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.426133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.426148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:19.416 [2024-05-12 05:05:26.426159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:19.416 [2024-05-12 05:05:26.426168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.439832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.416 [2024-05-12 05:05:26.439867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:19.416 [2024-05-12 05:05:26.439882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.641 ms 00:25:19.416 [2024-05-12 
05:05:26.439891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.416 [2024-05-12 05:05:26.452916] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:19.417 [2024-05-12 05:05:26.452954] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:19.417 [2024-05-12 05:05:26.452969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.417 [2024-05-12 05:05:26.452978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:19.417 [2024-05-12 05:05:26.452989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.974 ms 00:25:19.417 [2024-05-12 05:05:26.452998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.417 [2024-05-12 05:05:26.476144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.417 [2024-05-12 05:05:26.476180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:19.417 [2024-05-12 05:05:26.476196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.105 ms 00:25:19.417 [2024-05-12 05:05:26.476206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.417 [2024-05-12 05:05:26.488635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.417 [2024-05-12 05:05:26.488673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:19.417 [2024-05-12 05:05:26.488687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.341 ms 00:25:19.417 [2024-05-12 05:05:26.488696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.417 [2024-05-12 05:05:26.500977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.417 [2024-05-12 05:05:26.501012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:19.417 [2024-05-12 05:05:26.501026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.245 ms 00:25:19.417 [2024-05-12 05:05:26.501034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.417 [2024-05-12 05:05:26.501477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.417 [2024-05-12 05:05:26.501499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:19.417 [2024-05-12 05:05:26.501510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:25:19.417 [2024-05-12 05:05:26.501520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.565929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.565978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:19.676 [2024-05-12 05:05:26.565995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.386 ms 00:25:19.676 [2024-05-12 05:05:26.566005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.577150] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:19.676 [2024-05-12 05:05:26.579312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.579334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:19.676 [2024-05-12 05:05:26.579347] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.254 ms 00:25:19.676 [2024-05-12 05:05:26.579357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.579444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.579461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:19.676 [2024-05-12 05:05:26.579472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:19.676 [2024-05-12 05:05:26.579481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.580736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.580893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:19.676 [2024-05-12 05:05:26.580998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.210 ms 00:25:19.676 [2024-05-12 05:05:26.581118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.582982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.583153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:19.676 [2024-05-12 05:05:26.583257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:25:19.676 [2024-05-12 05:05:26.583373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.583448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.583555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:19.676 [2024-05-12 05:05:26.583664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:19.676 [2024-05-12 05:05:26.583720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.583872] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:19.676 [2024-05-12 05:05:26.583899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.583911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:19.676 [2024-05-12 05:05:26.583927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:19.676 [2024-05-12 05:05:26.583937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.610558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.610731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:19.676 [2024-05-12 05:05:26.610859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.593 ms 00:25:19.676 [2024-05-12 05:05:26.610907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.611066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.676 [2024-05-12 05:05:26.611127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:19.676 [2024-05-12 05:05:26.611267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:19.676 [2024-05-12 05:05:26.611319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.676 [2024-05-12 05:05:26.618003] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 267.658 ms, result 0 00:25:58.072  Copying: 952/1048576 [kB] (952 kBps) Copying: 5356/1048576 [kB] (4404 kBps) Copying: 30/1024 [MB] (25 MBps) Copying: 58/1024 [MB] (28 MBps) Copying: 87/1024 [MB] (28 MBps) Copying: 115/1024 [MB] (28 MBps) Copying: 142/1024 [MB] (27 MBps) Copying: 170/1024 [MB] (27 MBps) Copying: 198/1024 [MB] (27 MBps) Copying: 226/1024 [MB] (28 MBps) Copying: 254/1024 [MB] (28 MBps) Copying: 282/1024 [MB] (27 MBps) Copying: 310/1024 [MB] (27 MBps) Copying: 338/1024 [MB] (28 MBps) Copying: 366/1024 [MB] (28 MBps) Copying: 395/1024 [MB] (28 MBps) Copying: 423/1024 [MB] (28 MBps) Copying: 451/1024 [MB] (28 MBps) Copying: 480/1024 [MB] (28 MBps) Copying: 509/1024 [MB] (28 MBps) Copying: 538/1024 [MB] (29 MBps) Copying: 567/1024 [MB] (28 MBps) Copying: 595/1024 [MB] (28 MBps) Copying: 624/1024 [MB] (28 MBps) Copying: 653/1024 [MB] (28 MBps) Copying: 681/1024 [MB] (28 MBps) Copying: 709/1024 [MB] (27 MBps) Copying: 738/1024 [MB] (28 MBps) Copying: 767/1024 [MB] (28 MBps) Copying: 794/1024 [MB] (27 MBps) Copying: 822/1024 [MB] (27 MBps) Copying: 850/1024 [MB] (28 MBps) Copying: 878/1024 [MB] (27 MBps) Copying: 906/1024 [MB] (27 MBps) Copying: 933/1024 [MB] (27 MBps) Copying: 961/1024 [MB] (27 MBps) Copying: 990/1024 [MB] (28 MBps) Copying: 1018/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-05-12 05:06:05.169712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.072 [2024-05-12 05:06:05.170076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:58.072 [2024-05-12 05:06:05.170246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:58.072 [2024-05-12 05:06:05.170387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.072 [2024-05-12 05:06:05.170467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:58.072 [2024-05-12 05:06:05.173952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.072 [2024-05-12 05:06:05.174122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:58.072 [2024-05-12 05:06:05.174163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:25:58.072 [2024-05-12 05:06:05.174183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.072 [2024-05-12 05:06:05.174480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.072 [2024-05-12 05:06:05.174506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:58.072 [2024-05-12 05:06:05.174518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:25:58.072 [2024-05-12 05:06:05.174529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.072 [2024-05-12 05:06:05.185494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.072 [2024-05-12 05:06:05.185540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:58.072 [2024-05-12 05:06:05.185559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.944 ms 00:25:58.072 [2024-05-12 05:06:05.185570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.072 [2024-05-12 05:06:05.191269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.072 [2024-05-12 05:06:05.191315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 
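The layout dump earlier in this startup is internally consistent: with 20971520 L2P entries at an address size of 4 bytes, the l2p region must span 80 MiB, exactly as reported (Region l2p, blocks: 80.00 MiB). A minimal sketch of that arithmetic (not from the captured output), assuming the region is sized as entries times address size:

awk 'BEGIN { printf "l2p region: %.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'   # prints l2p region: 80.00 MiB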
00:25:58.072 [2024-05-12 05:06:05.191351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.677 ms 00:25:58.072 [2024-05-12 05:06:05.191361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.219504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.332 [2024-05-12 05:06:05.219552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:58.332 [2024-05-12 05:06:05.219585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.076 ms 00:25:58.332 [2024-05-12 05:06:05.219595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.233751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.332 [2024-05-12 05:06:05.233786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:58.332 [2024-05-12 05:06:05.233801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.116 ms 00:25:58.332 [2024-05-12 05:06:05.233810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.237002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.332 [2024-05-12 05:06:05.237039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:58.332 [2024-05-12 05:06:05.237070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.151 ms 00:25:58.332 [2024-05-12 05:06:05.237080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.261644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.332 [2024-05-12 05:06:05.261678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:58.332 [2024-05-12 05:06:05.261693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.523 ms 00:25:58.332 [2024-05-12 05:06:05.261702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.285897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.332 [2024-05-12 05:06:05.285931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:58.332 [2024-05-12 05:06:05.285945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.174 ms 00:25:58.332 [2024-05-12 05:06:05.285954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.309885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.332 [2024-05-12 05:06:05.309919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:58.332 [2024-05-12 05:06:05.309933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.909 ms 00:25:58.332 [2024-05-12 05:06:05.309942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.334092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.332 [2024-05-12 05:06:05.334127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:58.332 [2024-05-12 05:06:05.334141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.093 ms 00:25:58.332 [2024-05-12 05:06:05.334149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.332 [2024-05-12 05:06:05.334170] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:58.332 [2024-05-12 
05:06:05.334187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:58.332 [2024-05-12 05:06:05.334198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:25:58.332 [2024-05-12 05:06:05.334207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:25:58.332 [2024-05-12 05:06:05.334464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:58.332 [2024-05-12 05:06:05.334658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.334997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335267] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:58.333 [2024-05-12 05:06:05.335293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:58.333 [2024-05-12 05:06:05.335306] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c535803-3c77-4cb2-a504-54d4ad3e3ace 00:25:58.333 [2024-05-12 05:06:05.335316] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:25:58.333 [2024-05-12 05:06:05.335326] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136128 00:25:58.333 [2024-05-12 05:06:05.335335] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134144 00:25:58.333 [2024-05-12 05:06:05.335353] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:25:58.333 [2024-05-12 05:06:05.335362] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:58.333 [2024-05-12 05:06:05.335372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:58.333 [2024-05-12 05:06:05.335382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:58.333 [2024-05-12 05:06:05.335391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:58.333 [2024-05-12 05:06:05.335400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:58.333 [2024-05-12 05:06:05.335410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.333 [2024-05-12 05:06:05.335420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:58.333 [2024-05-12 05:06:05.335430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:25:58.333 [2024-05-12 05:06:05.335440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.333 [2024-05-12 05:06:05.349406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.333 [2024-05-12 05:06:05.349443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:58.333 [2024-05-12 05:06:05.349473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.921 ms 00:25:58.333 [2024-05-12 05:06:05.349483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.333 [2024-05-12 05:06:05.349693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.333 [2024-05-12 05:06:05.349708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:58.333 [2024-05-12 05:06:05.349718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:25:58.333 [2024-05-12 05:06:05.349728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.333 [2024-05-12 05:06:05.384729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.333 [2024-05-12 05:06:05.384765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:58.333 [2024-05-12 05:06:05.384779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.333 [2024-05-12 05:06:05.384789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.333 [2024-05-12 05:06:05.384837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.333 [2024-05-12 05:06:05.384849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:58.333 [2024-05-12 05:06:05.384859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.333 [2024-05-12 05:06:05.384867] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.333 [2024-05-12 05:06:05.384942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.333 [2024-05-12 05:06:05.384959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:58.333 [2024-05-12 05:06:05.384969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.333 [2024-05-12 05:06:05.384978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.333 [2024-05-12 05:06:05.384996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.333 [2024-05-12 05:06:05.385008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:58.334 [2024-05-12 05:06:05.385017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.334 [2024-05-12 05:06:05.385025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.459595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.459649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:58.593 [2024-05-12 05:06:05.459666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.459675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.490278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:58.593 [2024-05-12 05:06:05.490293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.490302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.490394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:58.593 [2024-05-12 05:06:05.490404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.490413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.490471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:58.593 [2024-05-12 05:06:05.490481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.490489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.490600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:58.593 [2024-05-12 05:06:05.490615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.490626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.490679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:58.593 [2024-05-12 05:06:05.490689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.490698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.490747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:58.593 [2024-05-12 05:06:05.490762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.490771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.593 [2024-05-12 05:06:05.490827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:58.593 [2024-05-12 05:06:05.490837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.593 [2024-05-12 05:06:05.490846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.593 [2024-05-12 05:06:05.490959] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 321.224 ms, result 0 00:25:59.531 00:25:59.531 00:25:59.531 05:06:06 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:01.437 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:01.437 05:06:08 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:01.437 [2024-05-12 05:06:08.247002] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:01.437 [2024-05-12 05:06:08.247165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78252 ] 00:26:01.437 [2024-05-12 05:06:08.419329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.695 [2024-05-12 05:06:08.611183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.957 [2024-05-12 05:06:08.859407] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:01.957 [2024-05-12 05:06:08.859480] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:01.957 [2024-05-12 05:06:09.008299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.008343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:01.957 [2024-05-12 05:06:09.008377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:01.957 [2024-05-12 05:06:09.008386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.008449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.008465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:01.957 [2024-05-12 05:06:09.008476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:01.957 [2024-05-12 05:06:09.008485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.008512] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:01.957 [2024-05-12 05:06:09.009402] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:01.957 [2024-05-12 05:06:09.009440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.009452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:01.957 [2024-05-12 05:06:09.009463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:26:01.957 [2024-05-12 05:06:09.009486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.010560] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:01.957 [2024-05-12 05:06:09.023341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.023376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:01.957 [2024-05-12 05:06:09.023395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 00:26:01.957 [2024-05-12 05:06:09.023404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.023461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.023477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:01.957 [2024-05-12 05:06:09.023487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:01.957 [2024-05-12 05:06:09.023496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.027542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 
05:06:09.027575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:01.957 [2024-05-12 05:06:09.027588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.992 ms 00:26:01.957 [2024-05-12 05:06:09.027597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.027690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.027707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:01.957 [2024-05-12 05:06:09.027717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:01.957 [2024-05-12 05:06:09.027726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.027783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.027802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:01.957 [2024-05-12 05:06:09.027813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:01.957 [2024-05-12 05:06:09.027821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.027852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:01.957 [2024-05-12 05:06:09.031448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.031480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:01.957 [2024-05-12 05:06:09.031492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:26:01.957 [2024-05-12 05:06:09.031501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.031535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.031548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:01.957 [2024-05-12 05:06:09.031557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:01.957 [2024-05-12 05:06:09.031565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.031588] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:01.957 [2024-05-12 05:06:09.031611] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:26:01.957 [2024-05-12 05:06:09.031646] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:01.957 [2024-05-12 05:06:09.031663] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:26:01.957 [2024-05-12 05:06:09.031725] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:26:01.957 [2024-05-12 05:06:09.031737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:01.957 [2024-05-12 05:06:09.031748] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:26:01.957 [2024-05-12 05:06:09.031759] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:01.957 [2024-05-12 05:06:09.031769] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:01.957 [2024-05-12 05:06:09.031781] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:01.957 [2024-05-12 05:06:09.031790] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:01.957 [2024-05-12 05:06:09.031798] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:26:01.957 [2024-05-12 05:06:09.031806] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:26:01.957 [2024-05-12 05:06:09.031816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.031824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:01.957 [2024-05-12 05:06:09.031834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:26:01.957 [2024-05-12 05:06:09.031842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.031898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.957 [2024-05-12 05:06:09.031910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:01.957 [2024-05-12 05:06:09.031921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:01.957 [2024-05-12 05:06:09.031930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.957 [2024-05-12 05:06:09.032011] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:01.957 [2024-05-12 05:06:09.032027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:01.957 [2024-05-12 05:06:09.032037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:01.957 [2024-05-12 05:06:09.032045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.957 [2024-05-12 05:06:09.032054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:01.957 [2024-05-12 05:06:09.032062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:01.957 [2024-05-12 05:06:09.032070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:01.957 [2024-05-12 05:06:09.032079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:01.957 [2024-05-12 05:06:09.032088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:01.957 [2024-05-12 05:06:09.032095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:01.957 [2024-05-12 05:06:09.032103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:01.957 [2024-05-12 05:06:09.032112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:01.957 [2024-05-12 05:06:09.032120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:01.957 [2024-05-12 05:06:09.032127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:01.957 [2024-05-12 05:06:09.032135] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:26:01.957 [2024-05-12 05:06:09.032143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.957 [2024-05-12 05:06:09.032150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:01.957 [2024-05-12 05:06:09.032158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:26:01.957 [2024-05-12 05:06:09.032166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:01.957 [2024-05-12 05:06:09.032173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:26:01.957 [2024-05-12 05:06:09.032181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:26:01.957 [2024-05-12 05:06:09.032201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:26:01.958 [2024-05-12 05:06:09.032210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:01.958 [2024-05-12 05:06:09.032260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:01.958 [2024-05-12 05:06:09.032286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:01.958 [2024-05-12 05:06:09.032295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:01.958 [2024-05-12 05:06:09.032304] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:26:01.958 [2024-05-12 05:06:09.032312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:01.958 [2024-05-12 05:06:09.032320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:01.958 [2024-05-12 05:06:09.032328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:01.958 [2024-05-12 05:06:09.032337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:01.958 [2024-05-12 05:06:09.032345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:01.958 [2024-05-12 05:06:09.032354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:26:01.958 [2024-05-12 05:06:09.032362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:01.958 [2024-05-12 05:06:09.032371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:01.958 [2024-05-12 05:06:09.032379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:01.958 [2024-05-12 05:06:09.032388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:01.958 [2024-05-12 05:06:09.032396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:01.958 [2024-05-12 05:06:09.032405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:26:01.958 [2024-05-12 05:06:09.032413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:01.958 [2024-05-12 05:06:09.032422] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:01.958 [2024-05-12 05:06:09.032431] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:01.958 [2024-05-12 05:06:09.032440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:01.958 [2024-05-12 05:06:09.032449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.958 [2024-05-12 05:06:09.032464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:01.958 [2024-05-12 05:06:09.032472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:01.958 [2024-05-12 05:06:09.032481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:01.958 [2024-05-12 05:06:09.032490] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:01.958 [2024-05-12 05:06:09.032499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:01.958 [2024-05-12 05:06:09.032507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:01.958 [2024-05-12 05:06:09.032516] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:01.958 [2024-05-12 05:06:09.032528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.958 [2024-05-12 05:06:09.032538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:01.958 [2024-05-12 05:06:09.032548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:26:01.958 [2024-05-12 05:06:09.032558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:26:01.958 [2024-05-12 05:06:09.032581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:26:01.958 [2024-05-12 05:06:09.032591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:26:01.958 [2024-05-12 05:06:09.032599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:26:01.958 [2024-05-12 05:06:09.032640] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:26:01.958 [2024-05-12 05:06:09.032663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:26:01.958 [2024-05-12 05:06:09.032673] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:26:01.958 [2024-05-12 05:06:09.032682] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:26:01.958 [2024-05-12 05:06:09.032691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:26:01.958 [2024-05-12 05:06:09.032700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:26:01.958 [2024-05-12 05:06:09.032710] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:26:01.958 [2024-05-12 05:06:09.032735] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:01.958 [2024-05-12 05:06:09.032761] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.958 [2024-05-12 05:06:09.032771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:01.958 [2024-05-12 05:06:09.032782] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:01.958 [2024-05-12 05:06:09.032792] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:01.958 [2024-05-12 05:06:09.032802] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:26:01.958 [2024-05-12 05:06:09.032813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.958 [2024-05-12 05:06:09.032823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:01.958 [2024-05-12 05:06:09.032834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:26:01.958 [2024-05-12 05:06:09.032845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.958 [2024-05-12 05:06:09.047412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.958 [2024-05-12 05:06:09.047449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:01.958 [2024-05-12 05:06:09.047465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.518 ms 00:26:01.958 [2024-05-12 05:06:09.047474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.958 [2024-05-12 05:06:09.047553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.958 [2024-05-12 05:06:09.047566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:01.958 [2024-05-12 05:06:09.047581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:01.958 [2024-05-12 05:06:09.047590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.091247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.091294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.247 [2024-05-12 05:06:09.091311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.602 ms 00:26:02.247 [2024-05-12 05:06:09.091323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.091397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.091412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.247 [2024-05-12 05:06:09.091423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:02.247 [2024-05-12 05:06:09.091432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.091763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.091781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.247 [2024-05-12 05:06:09.091792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:02.247 [2024-05-12 05:06:09.091801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.091928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.091945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.247 [2024-05-12 05:06:09.091956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:26:02.247 [2024-05-12 05:06:09.091965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.108212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.108283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.247 [2024-05-12 05:06:09.108314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.225 ms 00:26:02.247 [2024-05-12 
05:06:09.108325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.121279] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:02.247 [2024-05-12 05:06:09.121314] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:02.247 [2024-05-12 05:06:09.121329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.121338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:02.247 [2024-05-12 05:06:09.121349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.885 ms 00:26:02.247 [2024-05-12 05:06:09.121357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.144881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.144931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:02.247 [2024-05-12 05:06:09.144962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.486 ms 00:26:02.247 [2024-05-12 05:06:09.144972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.157685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.157719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:02.247 [2024-05-12 05:06:09.157749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.671 ms 00:26:02.247 [2024-05-12 05:06:09.157758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.169987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.170020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:02.247 [2024-05-12 05:06:09.170033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.192 ms 00:26:02.247 [2024-05-12 05:06:09.170042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.170445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.170467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:02.247 [2024-05-12 05:06:09.170479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:26:02.247 [2024-05-12 05:06:09.170488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.228057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.228113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:02.247 [2024-05-12 05:06:09.228130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.548 ms 00:26:02.247 [2024-05-12 05:06:09.228140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.238061] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:02.247 [2024-05-12 05:06:09.240011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.240040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:02.247 [2024-05-12 05:06:09.240054] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.816 ms 00:26:02.247 [2024-05-12 05:06:09.240062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.240136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.240154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:02.247 [2024-05-12 05:06:09.240165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:02.247 [2024-05-12 05:06:09.240173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.240792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.240811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:02.247 [2024-05-12 05:06:09.240823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:26:02.247 [2024-05-12 05:06:09.240831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.242471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.242502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:26:02.247 [2024-05-12 05:06:09.242519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:26:02.247 [2024-05-12 05:06:09.242528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.242560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.242573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:02.247 [2024-05-12 05:06:09.242584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:02.247 [2024-05-12 05:06:09.242592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.242674] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:02.247 [2024-05-12 05:06:09.242692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.242702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:02.247 [2024-05-12 05:06:09.242711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:02.247 [2024-05-12 05:06:09.242723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.266845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.266882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:02.247 [2024-05-12 05:06:09.266896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.098 ms 00:26:02.247 [2024-05-12 05:06:09.266905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.266976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.247 [2024-05-12 05:06:09.266998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:02.247 [2024-05-12 05:06:09.267008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:02.247 [2024-05-12 05:06:09.267017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.247 [2024-05-12 05:06:09.268415] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 259.482 ms, result 0 00:26:47.464  Copying: 1024/1024 [MB] (average 22 MBps)[2024-05-12 05:06:54.339422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.339684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:47.464 [2024-05-12 05:06:54.339846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:47.464 [2024-05-12 05:06:54.340078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.340298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:47.464 [2024-05-12 05:06:54.343603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.343758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:47.464 [2024-05-12 05:06:54.343890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:26:47.464 [2024-05-12 05:06:54.343947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.344360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.344517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:47.464 [2024-05-12 05:06:54.344655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:26:47.464 [2024-05-12 05:06:54.344790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.347834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.347991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:47.464 [2024-05-12 05:06:54.348110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:26:47.464 [2024-05-12 05:06:54.348158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12
05:06:54.353722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.353873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:47.464 [2024-05-12 05:06:54.353994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.400 ms 00:26:47.464 [2024-05-12 05:06:54.354091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.379381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.379564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:47.464 [2024-05-12 05:06:54.379673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.177 ms 00:26:47.464 [2024-05-12 05:06:54.379771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.394800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.394980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:47.464 [2024-05-12 05:06:54.395091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.968 ms 00:26:47.464 [2024-05-12 05:06:54.395113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.398652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.398824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:47.464 [2024-05-12 05:06:54.398954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.507 ms 00:26:47.464 [2024-05-12 05:06:54.399003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.424796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.424960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:47.464 [2024-05-12 05:06:54.425084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.699 ms 00:26:47.464 [2024-05-12 05:06:54.425183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.450577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.450772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:47.464 [2024-05-12 05:06:54.450880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.295 ms 00:26:47.464 [2024-05-12 05:06:54.450978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.477448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.477602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:47.464 [2024-05-12 05:06:54.477727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.409 ms 00:26:47.464 [2024-05-12 05:06:54.477842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.503770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.464 [2024-05-12 05:06:54.503953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:47.464 [2024-05-12 05:06:54.504067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.739 ms 00:26:47.464 [2024-05-12 05:06:54.504114] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.464 [2024-05-12 05:06:54.504195] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:47.464 [2024-05-12 05:06:54.504387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:47.464 [2024-05-12 05:06:54.504498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:26:47.464 [2024-05-12 05:06:54.504750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:47.464 [2024-05-12 05:06:54.504877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:47.464 [2024-05-12 05:06:54.504896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:47.464 [2024-05-12 05:06:54.504906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.504995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505891] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.505998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 
05:06:54.506148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:47.465 [2024-05-12 05:06:54.506307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:26:47.466 [2024-05-12 05:06:54.506401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:47.466 [2024-05-12 05:06:54.506428] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:47.466 [2024-05-12 05:06:54.506438] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c535803-3c77-4cb2-a504-54d4ad3e3ace 00:26:47.466 [2024-05-12 05:06:54.506454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:26:47.466 [2024-05-12 05:06:54.506463] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:47.466 [2024-05-12 05:06:54.506472] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:47.466 [2024-05-12 05:06:54.506482] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:47.466 [2024-05-12 05:06:54.506490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:47.466 [2024-05-12 05:06:54.506500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:47.466 [2024-05-12 05:06:54.506509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:47.466 [2024-05-12 05:06:54.506517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:47.466 [2024-05-12 05:06:54.506525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:47.466 [2024-05-12 05:06:54.506535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.466 [2024-05-12 05:06:54.506545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:47.466 [2024-05-12 05:06:54.506556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:26:47.466 [2024-05-12 05:06:54.506576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.466 [2024-05-12 05:06:54.519895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.466 [2024-05-12 05:06:54.519928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:47.466 [2024-05-12 05:06:54.519957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.267 ms 00:26:47.466 [2024-05-12 05:06:54.519966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.466 [2024-05-12 05:06:54.520189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.466 [2024-05-12 05:06:54.520221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:47.466 [2024-05-12 05:06:54.520282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:26:47.466 [2024-05-12 05:06:54.520295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.466 [2024-05-12 05:06:54.556018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.466 [2024-05-12 05:06:54.556057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.466 [2024-05-12 05:06:54.556087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.466 [2024-05-12 05:06:54.556097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.466 [2024-05-12 05:06:54.556148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.466 [2024-05-12 05:06:54.556160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:26:47.466 [2024-05-12 05:06:54.556176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.466 [2024-05-12 05:06:54.556185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.466 [2024-05-12 05:06:54.556296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.466 [2024-05-12 05:06:54.556315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.466 [2024-05-12 05:06:54.556342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.466 [2024-05-12 05:06:54.556368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.466 [2024-05-12 05:06:54.556406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.466 [2024-05-12 05:06:54.556429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.466 [2024-05-12 05:06:54.556440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.466 [2024-05-12 05:06:54.556457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.637707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.726 [2024-05-12 05:06:54.637753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.726 [2024-05-12 05:06:54.637784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.637806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.667745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.726 [2024-05-12 05:06:54.667780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.726 [2024-05-12 05:06:54.667810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.667826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.667901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.726 [2024-05-12 05:06:54.667917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.726 [2024-05-12 05:06:54.667927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.667936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.667980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.726 [2024-05-12 05:06:54.667994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.726 [2024-05-12 05:06:54.668004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.668012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.668173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.726 [2024-05-12 05:06:54.668191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.726 [2024-05-12 05:06:54.668202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.668213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.668287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:47.726 [2024-05-12 05:06:54.668306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:47.726 [2024-05-12 05:06:54.668318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.668327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.668376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.726 [2024-05-12 05:06:54.668390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.726 [2024-05-12 05:06:54.668401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.668411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.668457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.726 [2024-05-12 05:06:54.668471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.726 [2024-05-12 05:06:54.668482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.726 [2024-05-12 05:06:54.668491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.726 [2024-05-12 05:06:54.668618] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.183 ms, result 0 00:26:48.663 00:26:48.663 00:26:48.663 05:06:55 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:50.568 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:50.568 Process with pid 76335 is not found 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76335 00:26:50.568 05:06:57 -- common/autotest_common.sh@926 -- # '[' -z 76335 ']' 00:26:50.568 05:06:57 -- common/autotest_common.sh@930 -- # kill -0 76335 00:26:50.568 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (76335) - No such process 00:26:50.568 05:06:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 76335 is not found' 00:26:50.568 05:06:57 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:50.827 Remove shared memory files 00:26:50.827 05:06:57 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:50.827 05:06:57 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:50.827 05:06:57 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:50.827 05:06:57 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:50.827 05:06:57 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:50.827 05:06:57 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:50.827 05:06:57 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:50.827 00:26:50.827 real 3m54.429s 00:26:50.827 user 4m29.160s 00:26:50.827 sys 0m35.040s 00:26:50.827 05:06:57 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:50.827 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.827 ************************************ 00:26:50.827 END TEST ftl_dirty_shutdown 00:26:50.827 ************************************ 00:26:50.827 05:06:57 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:50.827 05:06:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:26:50.827 05:06:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:50.827 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.827 ************************************ 00:26:50.827 START TEST ftl_upgrade_shutdown 00:26:50.827 ************************************ 00:26:50.827 05:06:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:51.086 * Looking for test storage... 00:26:51.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:51.086 05:06:57 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:51.086 05:06:57 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:51.086 05:06:57 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:51.086 05:06:57 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:51.086 05:06:57 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:51.086 05:06:57 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:51.086 05:06:57 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:51.086 05:06:57 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:51.086 05:06:57 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:51.086 05:06:57 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:51.086 05:06:57 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:51.086 05:06:57 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:51.086 05:06:58 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:51.086 05:06:58 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:51.086 05:06:58 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:51.086 05:06:58 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:51.086 05:06:58 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:51.086 05:06:58 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:51.086 05:06:58 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:51.086 05:06:58 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:51.086 05:06:58 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:51.086 05:06:58 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:51.086 05:06:58 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:51.086 05:06:58 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:51.086 05:06:58 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:51.086 05:06:58 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:51.086 05:06:58 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:51.086 05:06:58 -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:51.086 05:06:58 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:51.086 05:06:58 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:51.086 05:06:58 -- ftl/common.sh@81 -- # local base_bdev= 00:26:51.086 05:06:58 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:51.086 05:06:58 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:51.086 05:06:58 -- ftl/common.sh@89 -- # spdk_tgt_pid=78807 00:26:51.086 05:06:58 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:51.086 05:06:58 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:51.086 05:06:58 -- ftl/common.sh@91 -- # waitforlisten 78807 00:26:51.086 05:06:58 -- common/autotest_common.sh@819 -- # '[' -z 78807 ']' 00:26:51.086 05:06:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.086 05:06:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:51.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.086 05:06:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.086 05:06:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:51.087 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:51.087 [2024-05-12 05:06:58.122412] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
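The records above show the harness launching a fresh spdk_tgt (pid 78807) pinned to core 0 and waiting for its RPC socket. A minimal sketch of that bring-up, using the paths from this run; the real waitforlisten helper in autotest_common.sh also tracks the pid and bounds the retries:

SPDK=/home/vagrant/spdk_repo/spdk
# start the target on core 0 (tcp_target_setup only loads tgt.json if it already exists)
"$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' &
spdk_tgt_pid=$!
# poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done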
00:26:51.087 [2024-05-12 05:06:58.122570] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78807 ] 00:26:51.345 [2024-05-12 05:06:58.295675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.604 [2024-05-12 05:06:58.519788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:51.604 [2024-05-12 05:06:58.520065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.982 05:06:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:52.982 05:06:59 -- common/autotest_common.sh@852 -- # return 0 00:26:52.982 05:06:59 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:52.982 05:06:59 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:52.982 05:06:59 -- ftl/common.sh@99 -- # local params 00:26:52.982 05:06:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.982 05:06:59 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:52.982 05:06:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.982 05:06:59 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:52.982 05:06:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.982 05:06:59 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:52.982 05:06:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.982 05:06:59 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:52.982 05:06:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.982 05:06:59 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:52.982 05:06:59 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.982 05:06:59 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:52.982 05:06:59 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:52.982 05:06:59 -- ftl/common.sh@54 -- # local name=base 00:26:52.982 05:06:59 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:52.982 05:06:59 -- ftl/common.sh@56 -- # local size=20480 00:26:52.982 05:06:59 -- ftl/common.sh@59 -- # local base_bdev 00:26:52.982 05:06:59 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:26:52.982 05:07:00 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:52.982 05:07:00 -- ftl/common.sh@62 -- # local base_size 00:26:52.982 05:07:00 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:52.982 05:07:00 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:26:52.982 05:07:00 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:52.982 05:07:00 -- common/autotest_common.sh@1359 -- # local bs 00:26:52.982 05:07:00 -- common/autotest_common.sh@1360 -- # local nb 00:26:52.982 05:07:00 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:53.241 05:07:00 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:53.241 { 00:26:53.241 "name": "basen1", 00:26:53.241 "aliases": [ 00:26:53.242 "d9aad6ca-de05-49b8-86bf-8e70e8c41726" 00:26:53.242 ], 00:26:53.242 "product_name": "NVMe disk", 00:26:53.242 "block_size": 4096, 00:26:53.242 "num_blocks": 1310720, 00:26:53.242 "uuid": "d9aad6ca-de05-49b8-86bf-8e70e8c41726", 00:26:53.242 "assigned_rate_limits": { 00:26:53.242 "rw_ios_per_sec": 0, 00:26:53.242 
"rw_mbytes_per_sec": 0, 00:26:53.242 "r_mbytes_per_sec": 0, 00:26:53.242 "w_mbytes_per_sec": 0 00:26:53.242 }, 00:26:53.242 "claimed": true, 00:26:53.242 "claim_type": "read_many_write_one", 00:26:53.242 "zoned": false, 00:26:53.242 "supported_io_types": { 00:26:53.242 "read": true, 00:26:53.242 "write": true, 00:26:53.242 "unmap": true, 00:26:53.242 "write_zeroes": true, 00:26:53.242 "flush": true, 00:26:53.242 "reset": true, 00:26:53.242 "compare": true, 00:26:53.242 "compare_and_write": false, 00:26:53.242 "abort": true, 00:26:53.242 "nvme_admin": true, 00:26:53.242 "nvme_io": true 00:26:53.242 }, 00:26:53.242 "driver_specific": { 00:26:53.242 "nvme": [ 00:26:53.242 { 00:26:53.242 "pci_address": "0000:00:07.0", 00:26:53.242 "trid": { 00:26:53.242 "trtype": "PCIe", 00:26:53.242 "traddr": "0000:00:07.0" 00:26:53.242 }, 00:26:53.242 "ctrlr_data": { 00:26:53.242 "cntlid": 0, 00:26:53.242 "vendor_id": "0x1b36", 00:26:53.242 "model_number": "QEMU NVMe Ctrl", 00:26:53.242 "serial_number": "12341", 00:26:53.242 "firmware_revision": "8.0.0", 00:26:53.242 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:53.242 "oacs": { 00:26:53.242 "security": 0, 00:26:53.242 "format": 1, 00:26:53.242 "firmware": 0, 00:26:53.242 "ns_manage": 1 00:26:53.242 }, 00:26:53.242 "multi_ctrlr": false, 00:26:53.242 "ana_reporting": false 00:26:53.242 }, 00:26:53.242 "vs": { 00:26:53.242 "nvme_version": "1.4" 00:26:53.242 }, 00:26:53.242 "ns_data": { 00:26:53.242 "id": 1, 00:26:53.242 "can_share": false 00:26:53.242 } 00:26:53.242 } 00:26:53.242 ], 00:26:53.242 "mp_policy": "active_passive" 00:26:53.242 } 00:26:53.242 } 00:26:53.242 ]' 00:26:53.242 05:07:00 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:53.242 05:07:00 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:53.242 05:07:00 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:53.242 05:07:00 -- common/autotest_common.sh@1363 -- # nb=1310720 00:26:53.242 05:07:00 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:26:53.242 05:07:00 -- common/autotest_common.sh@1367 -- # echo 5120 00:26:53.242 05:07:00 -- ftl/common.sh@63 -- # base_size=5120 00:26:53.242 05:07:00 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:53.242 05:07:00 -- ftl/common.sh@67 -- # clear_lvols 00:26:53.242 05:07:00 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:53.242 05:07:00 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:53.506 05:07:00 -- ftl/common.sh@28 -- # stores=b9a95135-b649-4e6f-9917-6d2d35eaeba5 00:26:53.506 05:07:00 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:53.506 05:07:00 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9a95135-b649-4e6f-9917-6d2d35eaeba5 00:26:53.780 05:07:00 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:54.047 05:07:01 -- ftl/common.sh@68 -- # lvs=7e6f19fb-df8d-4473-8850-939fefff2118 00:26:54.047 05:07:01 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7e6f19fb-df8d-4473-8850-939fefff2118 00:26:54.306 05:07:01 -- ftl/common.sh@107 -- # base_bdev=302f3fbd-8c42-4c64-bc9f-51daf00b7628 00:26:54.306 05:07:01 -- ftl/common.sh@108 -- # [[ -z 302f3fbd-8c42-4c64-bc9f-51daf00b7628 ]] 00:26:54.306 05:07:01 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 302f3fbd-8c42-4c64-bc9f-51daf00b7628 5120 00:26:54.306 05:07:01 -- ftl/common.sh@35 -- # local name=cache 00:26:54.306 05:07:01 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:54.306 05:07:01 -- ftl/common.sh@37 -- # local base_bdev=302f3fbd-8c42-4c64-bc9f-51daf00b7628 00:26:54.306 05:07:01 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:54.306 05:07:01 -- ftl/common.sh@41 -- # get_bdev_size 302f3fbd-8c42-4c64-bc9f-51daf00b7628 00:26:54.306 05:07:01 -- common/autotest_common.sh@1357 -- # local bdev_name=302f3fbd-8c42-4c64-bc9f-51daf00b7628 00:26:54.306 05:07:01 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:54.306 05:07:01 -- common/autotest_common.sh@1359 -- # local bs 00:26:54.306 05:07:01 -- common/autotest_common.sh@1360 -- # local nb 00:26:54.306 05:07:01 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 302f3fbd-8c42-4c64-bc9f-51daf00b7628 00:26:54.564 05:07:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:54.564 { 00:26:54.564 "name": "302f3fbd-8c42-4c64-bc9f-51daf00b7628", 00:26:54.564 "aliases": [ 00:26:54.564 "lvs/basen1p0" 00:26:54.564 ], 00:26:54.564 "product_name": "Logical Volume", 00:26:54.564 "block_size": 4096, 00:26:54.564 "num_blocks": 5242880, 00:26:54.564 "uuid": "302f3fbd-8c42-4c64-bc9f-51daf00b7628", 00:26:54.564 "assigned_rate_limits": { 00:26:54.564 "rw_ios_per_sec": 0, 00:26:54.564 "rw_mbytes_per_sec": 0, 00:26:54.564 "r_mbytes_per_sec": 0, 00:26:54.564 "w_mbytes_per_sec": 0 00:26:54.564 }, 00:26:54.564 "claimed": false, 00:26:54.564 "zoned": false, 00:26:54.564 "supported_io_types": { 00:26:54.564 "read": true, 00:26:54.564 "write": true, 00:26:54.564 "unmap": true, 00:26:54.564 "write_zeroes": true, 00:26:54.564 "flush": false, 00:26:54.564 "reset": true, 00:26:54.564 "compare": false, 00:26:54.564 "compare_and_write": false, 00:26:54.564 "abort": false, 00:26:54.564 "nvme_admin": false, 00:26:54.564 "nvme_io": false 00:26:54.564 }, 00:26:54.564 "driver_specific": { 00:26:54.564 "lvol": { 00:26:54.564 "lvol_store_uuid": "7e6f19fb-df8d-4473-8850-939fefff2118", 00:26:54.564 "base_bdev": "basen1", 00:26:54.564 "thin_provision": true, 00:26:54.564 "snapshot": false, 00:26:54.564 "clone": false, 00:26:54.564 "esnap_clone": false 00:26:54.564 } 00:26:54.564 } 00:26:54.564 } 00:26:54.564 ]' 00:26:54.564 05:07:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:54.564 05:07:01 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:54.564 05:07:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:54.564 05:07:01 -- common/autotest_common.sh@1363 -- # nb=5242880 00:26:54.564 05:07:01 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:26:54.564 05:07:01 -- common/autotest_common.sh@1367 -- # echo 20480 00:26:54.564 05:07:01 -- ftl/common.sh@41 -- # local base_size=1024 00:26:54.564 05:07:01 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:54.564 05:07:01 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:54.823 05:07:01 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:54.823 05:07:01 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:54.823 05:07:01 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:55.080 05:07:02 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:55.080 05:07:02 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:55.080 05:07:02 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 302f3fbd-8c42-4c64-bc9f-51daf00b7628 -c cachen1p0 --l2p_dram_limit 2 00:26:55.340 
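Taken together, the RPC calls interleaved above assemble the device stack under test: a 20480 MiB thin-provisioned lvol on the base drive at 0000:00:07.0, a 5120 MiB split of the cache drive at 0000:00:06.0, and an FTL bdev tying the two together. Collected into one sequence (the UUIDs come from each preceding command's output and are left as placeholders here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0    # -> basen1
# get_bdev_size: block_size * num_blocks via jq, in MiB (4096 * 1310720 -> 5120)
$rpc bdev_get_bdevs -b basen1 | jq '.[] .block_size, .[] .num_blocks'
# clear_lvols: drop any stale lvstores before creating a fresh one
for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc bdev_lvol_delete_lvstore -u $lvs
done
$rpc bdev_lvol_create_lvstore basen1 lvs                            # -> <lvs-uuid>
$rpc bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # -> <lvol-uuid>
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0   # -> cachen1
$rpc bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0
$rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

The thin-provisioning flag (-t) is what lets a 20480 MiB volume sit on a 5 GiB base namespace; FTL startup then scrubs and lays out the device, as traced below.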
[2024-05-12 05:07:02.329486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.329554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:55.340 [2024-05-12 05:07:02.329576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:55.340 [2024-05-12 05:07:02.329587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.329668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.329685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:55.340 [2024-05-12 05:07:02.329698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:26:55.340 [2024-05-12 05:07:02.329708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.329737] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:55.340 [2024-05-12 05:07:02.330580] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:55.340 [2024-05-12 05:07:02.330621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.330648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:55.340 [2024-05-12 05:07:02.330662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.888 ms 00:26:55.340 [2024-05-12 05:07:02.330673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.330827] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 403c0869-d652-4097-bbc5-81eee09aec87 00:26:55.340 [2024-05-12 05:07:02.331865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.331891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:55.340 [2024-05-12 05:07:02.331904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:55.340 [2024-05-12 05:07:02.331916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.336024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.336069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:55.340 [2024-05-12 05:07:02.336086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.058 ms 00:26:55.340 [2024-05-12 05:07:02.336098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.336153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.336171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:55.340 [2024-05-12 05:07:02.336182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:55.340 [2024-05-12 05:07:02.336196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.336313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.336336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:55.340 [2024-05-12 05:07:02.336348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:55.340 [2024-05-12 05:07:02.336360] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.336399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:55.340 [2024-05-12 05:07:02.340218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.340290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:55.340 [2024-05-12 05:07:02.340308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.832 ms 00:26:55.340 [2024-05-12 05:07:02.340319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.340356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.340370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:55.340 [2024-05-12 05:07:02.340383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:55.340 [2024-05-12 05:07:02.340393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.340450] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:55.340 [2024-05-12 05:07:02.340630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:55.340 [2024-05-12 05:07:02.340656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:55.340 [2024-05-12 05:07:02.340670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:55.340 [2024-05-12 05:07:02.340685] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:55.340 [2024-05-12 05:07:02.340697] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:55.340 [2024-05-12 05:07:02.340709] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:55.340 [2024-05-12 05:07:02.340720] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:55.340 [2024-05-12 05:07:02.340730] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:55.340 [2024-05-12 05:07:02.340744] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:55.340 [2024-05-12 05:07:02.340757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.340767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:55.340 [2024-05-12 05:07:02.340779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:26:55.340 [2024-05-12 05:07:02.340789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.340860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.340 [2024-05-12 05:07:02.340873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:55.340 [2024-05-12 05:07:02.340897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:26:55.340 [2024-05-12 05:07:02.340908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.340 [2024-05-12 05:07:02.340997] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:55.340 [2024-05-12 05:07:02.341014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:55.340 [2024-05-12 
05:07:02.341027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:55.340 [2024-05-12 05:07:02.341038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:55.340 [2024-05-12 05:07:02.341059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:55.340 [2024-05-12 05:07:02.341079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:55.340 [2024-05-12 05:07:02.341090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:55.340 [2024-05-12 05:07:02.341099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341110] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:55.340 [2024-05-12 05:07:02.341119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:55.340 [2024-05-12 05:07:02.341131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:55.340 [2024-05-12 05:07:02.341152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:55.340 [2024-05-12 05:07:02.341182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:55.340 [2024-05-12 05:07:02.341193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:55.340 [2024-05-12 05:07:02.341212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:55.340 [2024-05-12 05:07:02.341221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:55.340 [2024-05-12 05:07:02.341232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:55.340 [2024-05-12 05:07:02.341241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:55.340 [2024-05-12 05:07:02.341285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:55.340 [2024-05-12 05:07:02.341296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:55.340 [2024-05-12 05:07:02.341308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:55.340 [2024-05-12 05:07:02.341317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:55.340 [2024-05-12 05:07:02.341327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:55.340 [2024-05-12 05:07:02.341336] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:55.340 [2024-05-12 05:07:02.341347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:55.340 [2024-05-12 05:07:02.341356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:55.340 [2024-05-12 05:07:02.341368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:55.340 [2024-05-12 05:07:02.341378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:55.340 [2024-05-12 
05:07:02.341388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:55.340 [2024-05-12 05:07:02.341397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:55.340 [2024-05-12 05:07:02.341409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.340 [2024-05-12 05:07:02.341418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:55.341 [2024-05-12 05:07:02.341430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:55.341 [2024-05-12 05:07:02.341439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.341 [2024-05-12 05:07:02.341450] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:55.341 [2024-05-12 05:07:02.341461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:55.341 [2024-05-12 05:07:02.341472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:55.341 [2024-05-12 05:07:02.341482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.341 [2024-05-12 05:07:02.341494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:55.341 [2024-05-12 05:07:02.341503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:55.341 [2024-05-12 05:07:02.341516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:55.341 [2024-05-12 05:07:02.341526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:55.341 [2024-05-12 05:07:02.341539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:55.341 [2024-05-12 05:07:02.341549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:55.341 [2024-05-12 05:07:02.341560] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:55.341 [2024-05-12 05:07:02.341573] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341586] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:55.341 [2024-05-12 05:07:02.341596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:55.341 [2024-05-12 05:07:02.341629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:55.341 [2024-05-12 05:07:02.341669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:55.341 [2024-05-12 05:07:02.341681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:55.341 [2024-05-12 05:07:02.341691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341702] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341724] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:55.341 [2024-05-12 05:07:02.341750] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:55.341 [2024-05-12 05:07:02.341760] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:55.341 [2024-05-12 05:07:02.341776] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341787] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:55.341 [2024-05-12 05:07:02.341798] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:55.341 [2024-05-12 05:07:02.341809] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:55.341 [2024-05-12 05:07:02.341820] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:55.341 [2024-05-12 05:07:02.341831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.341844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:55.341 [2024-05-12 05:07:02.341854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.879 ms 00:26:55.341 [2024-05-12 05:07:02.341866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.356742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.356782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:55.341 [2024-05-12 05:07:02.356815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.827 ms 00:26:55.341 [2024-05-12 05:07:02.356827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.356870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.356888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:55.341 [2024-05-12 05:07:02.356899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:55.341 [2024-05-12 05:07:02.356910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.387251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.387297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:55.341 [2024-05-12 05:07:02.387328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.286 ms 00:26:55.341 [2024-05-12 
05:07:02.387340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.387379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.387398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:55.341 [2024-05-12 05:07:02.387409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:55.341 [2024-05-12 05:07:02.387420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.387778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.387798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:55.341 [2024-05-12 05:07:02.387812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:26:55.341 [2024-05-12 05:07:02.387824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.387867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.387898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:55.341 [2024-05-12 05:07:02.387909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:55.341 [2024-05-12 05:07:02.387920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.403309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.403350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:55.341 [2024-05-12 05:07:02.403381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.366 ms 00:26:55.341 [2024-05-12 05:07:02.403393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.414281] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:55.341 [2024-05-12 05:07:02.415087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.415119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:55.341 [2024-05-12 05:07:02.415140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.600 ms 00:26:55.341 [2024-05-12 05:07:02.415151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.439003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.341 [2024-05-12 05:07:02.439040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:55.341 [2024-05-12 05:07:02.439077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.819 ms 00:26:55.341 [2024-05-12 05:07:02.439088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.341 [2024-05-12 05:07:02.439138] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
00:26:55.341 [2024-05-12 05:07:02.439156] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:57.870 [2024-05-12 05:07:04.794064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.870 [2024-05-12 05:07:04.794114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:57.870 [2024-05-12 05:07:04.794152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2354.940 ms 00:26:57.871 [2024-05-12 05:07:04.794163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.794298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.794318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:57.871 [2024-05-12 05:07:04.794332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:26:57.871 [2024-05-12 05:07:04.794344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.820842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.820877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:57.871 [2024-05-12 05:07:04.820911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.401 ms 00:26:57.871 [2024-05-12 05:07:04.820922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.845275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.845309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:57.871 [2024-05-12 05:07:04.845345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.297 ms 00:26:57.871 [2024-05-12 05:07:04.845356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.845723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.845753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:57.871 [2024-05-12 05:07:04.845770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:26:57.871 [2024-05-12 05:07:04.845780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.909442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.909478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:57.871 [2024-05-12 05:07:04.909512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 63.616 ms 00:26:57.871 [2024-05-12 05:07:04.909523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.935021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.935057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:57.871 [2024-05-12 05:07:04.935091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.451 ms 00:26:57.871 [2024-05-12 05:07:04.935104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.936842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.936874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:57.871 [2024-05-12 05:07:04.936907] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.685 ms 00:26:57.871 [2024-05-12 05:07:04.936918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.961399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.961435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:57.871 [2024-05-12 05:07:04.961468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.432 ms 00:26:57.871 [2024-05-12 05:07:04.961485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.961535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.961551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:57.871 [2024-05-12 05:07:04.961564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:57.871 [2024-05-12 05:07:04.961574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.961662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.871 [2024-05-12 05:07:04.961679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:57.871 [2024-05-12 05:07:04.961707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:57.871 [2024-05-12 05:07:04.961735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.871 [2024-05-12 05:07:04.963072] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2633.040 ms, result 0 00:26:57.871 { 00:26:57.871 "name": "ftl", 00:26:57.871 "uuid": "403c0869-d652-4097-bbc5-81eee09aec87" 00:26:57.871 } 00:26:57.871 05:07:04 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:58.129 [2024-05-12 05:07:05.205986] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.129 05:07:05 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:58.387 05:07:05 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:58.645 [2024-05-12 05:07:05.570643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:58.645 05:07:05 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:58.903 [2024-05-12 05:07:05.823495] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:58.903 05:07:05 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:59.161 05:07:06 -- 
ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:59.161 Fill FTL, iteration 1 00:26:59.161 05:07:06 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:59.161 05:07:06 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:59.161 05:07:06 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:59.161 05:07:06 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:59.162 05:07:06 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:59.162 05:07:06 -- ftl/common.sh@163 -- # spdk_ini_pid=78927 00:26:59.162 05:07:06 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:59.162 05:07:06 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:59.162 05:07:06 -- ftl/common.sh@165 -- # waitforlisten 78927 /var/tmp/spdk.tgt.sock 00:26:59.162 05:07:06 -- common/autotest_common.sh@819 -- # '[' -z 78927 ']' 00:26:59.162 05:07:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:59.162 05:07:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:59.162 05:07:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:59.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:59.162 05:07:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:59.162 05:07:06 -- common/autotest_common.sh@10 -- # set +x 00:26:59.162 [2024-05-12 05:07:06.277578] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
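The geometry fixed at upgrade_shutdown.sh@28-@35 makes each fill pass exactly 1 GiB: size=1073741824 bytes equals bs * count = 1048576 * 1024, driven at queue depth qd=2, and --seek/--skip count bs-sized blocks, so iteration 2 will resume 1024 blocks (1 GiB) into ftln1. A quick arithmetic check:

echo $(( 1048576 * 1024 ))   # 1073741824 bytes per pass == $size
echo $(( 2 * 1024 ))         # 2048 MiB of the 20480 MiB FTL device across both passes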
00:26:59.162 [2024-05-12 05:07:06.277732] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78927 ] 00:26:59.419 [2024-05-12 05:07:06.438191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.677 [2024-05-12 05:07:06.632105] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:59.677 [2024-05-12 05:07:06.632400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.053 05:07:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:01.053 05:07:07 -- common/autotest_common.sh@852 -- # return 0 00:27:01.053 05:07:07 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:01.053 ftln1 00:27:01.053 05:07:08 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:01.053 05:07:08 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:01.312 05:07:08 -- ftl/common.sh@173 -- # echo ']}' 00:27:01.312 05:07:08 -- ftl/common.sh@176 -- # killprocess 78927 00:27:01.312 05:07:08 -- common/autotest_common.sh@926 -- # '[' -z 78927 ']' 00:27:01.312 05:07:08 -- common/autotest_common.sh@930 -- # kill -0 78927 00:27:01.312 05:07:08 -- common/autotest_common.sh@931 -- # uname 00:27:01.312 05:07:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:01.312 05:07:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78927 00:27:01.312 05:07:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:01.312 05:07:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:01.312 killing process with pid 78927 00:27:01.312 05:07:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78927' 00:27:01.312 05:07:08 -- common/autotest_common.sh@945 -- # kill 78927 00:27:01.312 05:07:08 -- common/autotest_common.sh@950 -- # wait 78927 00:27:03.215 05:07:09 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:03.215 05:07:09 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:03.215 [2024-05-12 05:07:10.000117] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
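Before the first fill, the target exported the FTL bdev over NVMe/TCP and this second app (pid 78927, core 1, RPC socket /var/tmp/spdk.tgt.sock) attached to it as an initiator, producing ftln1. The two sides, condensed from the RPCs in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2018-09.io.spdk:cnode0
# target side, on the default socket
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem $nqn -a -m 1
$rpc nvmf_subsystem_add_ns $nqn ftl
$rpc nvmf_subsystem_add_listener $nqn -t TCP -f ipv4 -s 4420 -a 127.0.0.1
# initiator side: connect, then snapshot the bdev config for spdk_dd --json
$rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
     -a 127.0.0.1 -s 4420 -f ipv4 -n $nqn                           # -> ftln1
{ echo '{"subsystems": ['
  $rpc -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
  echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

Once ini.json is written, the initiator app is killed (killprocess 78927): spdk_dd rebuilds the bdev stack from the JSON on every invocation, so no long-lived initiator process is needed.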
00:27:03.215 [2024-05-12 05:07:10.000332] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78982 ] 00:27:03.215 [2024-05-12 05:07:10.167525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.215 [2024-05-12 05:07:10.313145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.342  Copying: 223/1024 [MB] (223 MBps) Copying: 444/1024 [MB] (221 MBps) Copying: 664/1024 [MB] (220 MBps) Copying: 885/1024 [MB] (221 MBps) Copying: 1024/1024 [MB] (average 221 MBps) 00:27:09.342 00:27:09.342 05:07:16 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:09.342 Calculate MD5 checksum, iteration 1 00:27:09.342 05:07:16 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:09.342 05:07:16 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.342 05:07:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:09.342 05:07:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:09.342 05:07:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:09.342 05:07:16 -- ftl/common.sh@154 -- # return 0 00:27:09.342 05:07:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.342 [2024-05-12 05:07:16.311952] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
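Every tcp_dd in this trace expands to the same spdk_dd invocation, differing only in the trailing I/O flags; reconstructed from the common.sh@199 lines above:

tcp_dd() {
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
}

Because the JSON config re-attaches the NVMe/TCP controller, each dd run exercises the full initiator-to-target round trip; in this run the random-data fills settle around 220 MBps and the sequential readbacks around 465 MBps.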
00:27:09.342 [2024-05-12 05:07:16.312115] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79047 ] 00:27:09.601 [2024-05-12 05:07:16.480946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.601 [2024-05-12 05:07:16.631834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.117  Copying: 464/1024 [MB] (464 MBps) Copying: 932/1024 [MB] (468 MBps) Copying: 1024/1024 [MB] (average 464 MBps) 00:27:13.117 00:27:13.117 05:07:20 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:13.117 05:07:20 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:15.025 05:07:21 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:15.026 Fill FTL, iteration 2 00:27:15.026 05:07:21 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=453baa3910cf89bc32815fb9a4488002 00:27:15.026 05:07:21 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:15.026 05:07:21 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:15.026 05:07:21 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:15.026 05:07:21 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:15.026 05:07:21 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:15.026 05:07:21 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:15.026 05:07:21 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:15.026 05:07:21 -- ftl/common.sh@154 -- # return 0 00:27:15.026 05:07:21 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:15.026 [2024-05-12 05:07:21.967203] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
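With that wrapper in hand, the iteration bookkeeping at upgrade_shutdown.sh@38-@48 condenses to a two-pass fill-and-checksum loop; a sketch equivalent to the traced commands:

seek=0; skip=0; sums=()
for (( i = 0; i < 2; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    seek=$(( seek + 1024 ))
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$(( skip + 1024 ))
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
done

The recorded sums (453baa… for the first gigabyte, 9506d2… for the second in this run) are kept for comparison after the shutdown/upgrade cycle the test performs next.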
00:27:15.026 [2024-05-12 05:07:21.967372] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79111 ] 00:27:15.026 [2024-05-12 05:07:22.127275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.296 [2024-05-12 05:07:22.325385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.519  Copying: 223/1024 [MB] (223 MBps) Copying: 441/1024 [MB] (218 MBps) Copying: 655/1024 [MB] (214 MBps) Copying: 874/1024 [MB] (219 MBps) Copying: 1024/1024 [MB] (average 217 MBps) 00:27:21.519 00:27:21.519 Calculate MD5 checksum, iteration 2 00:27:21.519 05:07:28 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:21.519 05:07:28 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:21.519 05:07:28 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:21.519 05:07:28 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:21.519 05:07:28 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:21.519 05:07:28 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:21.519 05:07:28 -- ftl/common.sh@154 -- # return 0 00:27:21.519 05:07:28 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:21.519 [2024-05-12 05:07:28.421598] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:21.519 [2024-05-12 05:07:28.421758] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79181 ] 00:27:21.519 [2024-05-12 05:07:28.590039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.778 [2024-05-12 05:07:28.738937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.906  Copying: 471/1024 [MB] (471 MBps) Copying: 938/1024 [MB] (467 MBps) Copying: 1024/1024 [MB] (average 468 MBps) 00:27:25.906 00:27:25.906 05:07:32 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:25.906 05:07:32 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:27.806 05:07:34 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:27.806 05:07:34 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9506d23e1f29adb8b18d8babea9fe276 00:27:27.806 05:07:34 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:27.806 05:07:34 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:27.806 05:07:34 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:27.806 [2024-05-12 05:07:34.782991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.806 [2024-05-12 05:07:34.783042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:27.806 [2024-05-12 05:07:34.783076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:27.806 [2024-05-12 05:07:34.783086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.806 [2024-05-12 05:07:34.783119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.806 [2024-05-12 05:07:34.783134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:27.806 [2024-05-12 05:07:34.783145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:27.806 [2024-05-12 05:07:34.783154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.806 [2024-05-12 05:07:34.783179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.806 [2024-05-12 05:07:34.783191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:27.806 [2024-05-12 05:07:34.783206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:27.806 [2024-05-12 05:07:34.783215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.806 [2024-05-12 05:07:34.783329] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.300 ms, result 0 00:27:27.806 true 00:27:27.806 05:07:34 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:28.065 { 00:27:28.065 "name": "ftl", 00:27:28.065 "properties": [ 00:27:28.065 { 00:27:28.065 "name": "superblock_version", 00:27:28.065 "value": 5, 00:27:28.065 "read-only": true 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "name": "base_device", 00:27:28.065 "bands": [ 00:27:28.065 { 00:27:28.065 "id": 0, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 1, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 2, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 
00:27:28.065 { 00:27:28.065 "id": 3, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 4, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 5, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 6, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 7, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 8, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 9, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 10, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 11, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 12, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 13, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 14, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 15, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 16, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 17, 00:27:28.065 "state": "FREE", 00:27:28.065 "validity": 0.0 00:27:28.065 } 00:27:28.065 ], 00:27:28.065 "read-only": true 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "name": "cache_device", 00:27:28.065 "type": "bdev", 00:27:28.065 "chunks": [ 00:27:28.065 { 00:27:28.065 "id": 0, 00:27:28.065 "state": "CLOSED", 00:27:28.065 "utilization": 1.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 1, 00:27:28.065 "state": "CLOSED", 00:27:28.065 "utilization": 1.0 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 2, 00:27:28.065 "state": "OPEN", 00:27:28.065 "utilization": 0.001953125 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "id": 3, 00:27:28.065 "state": "OPEN", 00:27:28.065 "utilization": 0.0 00:27:28.065 } 00:27:28.065 ], 00:27:28.065 "read-only": true 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "name": "verbose_mode", 00:27:28.065 "value": true, 00:27:28.065 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:28.065 }, 00:27:28.065 { 00:27:28.065 "name": "prep_upgrade_on_shutdown", 00:27:28.065 "value": false, 00:27:28.065 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:28.065 } 00:27:28.065 ] 00:27:28.065 } 00:27:28.065 05:07:35 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:28.324 [2024-05-12 05:07:35.211420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.324 [2024-05-12 05:07:35.211483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:28.324 [2024-05-12 05:07:35.211499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:28.324 [2024-05-12 05:07:35.211510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.324 [2024-05-12 05:07:35.211541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
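Annotation: the JSON above is the bdev_ftl_get_properties output after the two fills. All 18 base-device bands are still FREE, while the cache_device shows two CLOSED chunks at utilization 1.0 and one OPEN chunk barely started, consistent with the two 1024 MiB fill passes still resident in the NV cache (4 chunks over 4096 MiB of data_nvc gives roughly 1 GiB per chunk). The chunk check traced just below at upgrade_shutdown.sh@63-64 counts the non-empty chunks with jq; an equivalent standalone invocation:

```bash
# Same filter as traced at upgrade_shutdown.sh@63: count cache chunks whose
# utilization is non-zero. Here it prints 3 (two CLOSED + one OPEN chunk),
# so the [[ 3 -eq 0 ]] guard below is false and the test proceeds to a
# shutdown with a non-empty write buffer -- the state that
# prep_upgrade_on_shutdown is meant to handle.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
  jq '[.properties[] | select(.name == "cache_device")
       | .chunks[] | select(.utilization != 0.0)] | length'
```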
00:27:28.324 [2024-05-12 05:07:35.211556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:28.324 [2024-05-12 05:07:35.211566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:28.324 [2024-05-12 05:07:35.211575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.324 [2024-05-12 05:07:35.211613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.324 [2024-05-12 05:07:35.211625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:28.324 [2024-05-12 05:07:35.211635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:28.324 [2024-05-12 05:07:35.211644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.324 [2024-05-12 05:07:35.211706] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.271 ms, result 0 00:27:28.324 true 00:27:28.324 05:07:35 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:28.324 05:07:35 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:28.324 05:07:35 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:28.582 05:07:35 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:28.582 05:07:35 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:28.582 05:07:35 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:28.582 [2024-05-12 05:07:35.700027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.582 [2024-05-12 05:07:35.700090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:28.582 [2024-05-12 05:07:35.700123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:28.582 [2024-05-12 05:07:35.700133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.582 [2024-05-12 05:07:35.700164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.582 [2024-05-12 05:07:35.700179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:28.582 [2024-05-12 05:07:35.700189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:28.582 [2024-05-12 05:07:35.700198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.582 [2024-05-12 05:07:35.700221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.582 [2024-05-12 05:07:35.700276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:28.582 [2024-05-12 05:07:35.700289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:28.582 [2024-05-12 05:07:35.700298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.582 [2024-05-12 05:07:35.700403] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.356 ms, result 0 00:27:28.582 true 00:27:28.841 05:07:35 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:28.841 { 00:27:28.841 "name": "ftl", 00:27:28.841 "properties": [ 00:27:28.841 { 00:27:28.841 "name": "superblock_version", 00:27:28.841 "value": 5, 00:27:28.841 "read-only": true 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "name": 
"base_device", 00:27:28.841 "bands": [ 00:27:28.841 { 00:27:28.841 "id": 0, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 1, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 2, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 3, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 4, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 5, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 6, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 7, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 8, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 9, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 10, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 11, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 12, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 13, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 14, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 15, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 16, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 17, 00:27:28.841 "state": "FREE", 00:27:28.841 "validity": 0.0 00:27:28.841 } 00:27:28.841 ], 00:27:28.841 "read-only": true 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "name": "cache_device", 00:27:28.841 "type": "bdev", 00:27:28.841 "chunks": [ 00:27:28.841 { 00:27:28.841 "id": 0, 00:27:28.841 "state": "CLOSED", 00:27:28.841 "utilization": 1.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 1, 00:27:28.841 "state": "CLOSED", 00:27:28.841 "utilization": 1.0 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 2, 00:27:28.841 "state": "OPEN", 00:27:28.841 "utilization": 0.001953125 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "id": 3, 00:27:28.841 "state": "OPEN", 00:27:28.841 "utilization": 0.0 00:27:28.841 } 00:27:28.841 ], 00:27:28.841 "read-only": true 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "name": "verbose_mode", 00:27:28.841 "value": true, 00:27:28.841 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:28.841 }, 00:27:28.841 { 00:27:28.841 "name": "prep_upgrade_on_shutdown", 00:27:28.841 "value": true, 00:27:28.841 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:28.841 } 00:27:28.841 ] 00:27:28.841 } 00:27:28.841 05:07:35 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:28.841 05:07:35 -- ftl/common.sh@130 -- # [[ -n 78807 ]] 00:27:28.841 05:07:35 -- ftl/common.sh@131 -- # killprocess 78807 00:27:28.841 05:07:35 -- common/autotest_common.sh@926 -- # '[' -z 78807 ']' 00:27:28.842 05:07:35 -- common/autotest_common.sh@930 -- 
# kill -0 78807 00:27:28.842 05:07:35 -- common/autotest_common.sh@931 -- # uname 00:27:28.842 05:07:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:28.842 05:07:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78807 00:27:28.842 killing process with pid 78807 00:27:28.842 05:07:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:28.842 05:07:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:28.842 05:07:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78807' 00:27:28.842 05:07:35 -- common/autotest_common.sh@945 -- # kill 78807 00:27:28.842 05:07:35 -- common/autotest_common.sh@950 -- # wait 78807 00:27:29.776 [2024-05-12 05:07:36.670536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:29.776 [2024-05-12 05:07:36.685629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.776 [2024-05-12 05:07:36.685670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:29.776 [2024-05-12 05:07:36.685702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:29.776 [2024-05-12 05:07:36.685713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.776 [2024-05-12 05:07:36.685739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:29.776 [2024-05-12 05:07:36.688450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.776 [2024-05-12 05:07:36.688497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:29.776 [2024-05-12 05:07:36.688510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.692 ms 00:27:29.776 [2024-05-12 05:07:36.688526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.893 [2024-05-12 05:07:44.600678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.893 [2024-05-12 05:07:44.600743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:37.893 [2024-05-12 05:07:44.600768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7912.177 ms 00:27:37.893 [2024-05-12 05:07:44.600778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.893 [2024-05-12 05:07:44.602032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.893 [2024-05-12 05:07:44.602086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:37.893 [2024-05-12 05:07:44.602102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.232 ms 00:27:37.893 [2024-05-12 05:07:44.602113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.893 [2024-05-12 05:07:44.603323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.893 [2024-05-12 05:07:44.603376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:37.893 [2024-05-12 05:07:44.603396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.156 ms 00:27:37.893 [2024-05-12 05:07:44.603406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.893 [2024-05-12 05:07:44.613982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.893 [2024-05-12 05:07:44.614067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:37.893 [2024-05-12 05:07:44.614084] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 10.512 ms 00:27:37.893 [2024-05-12 05:07:44.614094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.893 [2024-05-12 05:07:44.621228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.893 [2024-05-12 05:07:44.621290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:37.893 [2024-05-12 05:07:44.621306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.093 ms 00:27:37.893 [2024-05-12 05:07:44.621316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.893 [2024-05-12 05:07:44.621416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.621440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:37.894 [2024-05-12 05:07:44.621451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:27:37.894 [2024-05-12 05:07:44.621460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.631455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.631518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:37.894 [2024-05-12 05:07:44.631532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.976 ms 00:27:37.894 [2024-05-12 05:07:44.631541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.641935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.641985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:37.894 [2024-05-12 05:07:44.641999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.356 ms 00:27:37.894 [2024-05-12 05:07:44.642008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.652056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.652104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:37.894 [2024-05-12 05:07:44.652117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.011 ms 00:27:37.894 [2024-05-12 05:07:44.652126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.662589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.662640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:37.894 [2024-05-12 05:07:44.662670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.384 ms 00:27:37.894 [2024-05-12 05:07:44.662679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.662716] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:37.894 [2024-05-12 05:07:44.662736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:37.894 [2024-05-12 05:07:44.662749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:37.894 [2024-05-12 05:07:44.662759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:37.894 [2024-05-12 05:07:44.662770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 
[2024-05-12 05:07:44.662781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:37.894 [2024-05-12 05:07:44.662952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:37.894 [2024-05-12 05:07:44.662962] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 403c0869-d652-4097-bbc5-81eee09aec87 00:27:37.894 [2024-05-12 05:07:44.662988] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:37.894 [2024-05-12 05:07:44.662998] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:37.894 [2024-05-12 05:07:44.663007] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:37.894 [2024-05-12 05:07:44.663018] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:37.894 [2024-05-12 05:07:44.663032] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:37.894 [2024-05-12 05:07:44.663043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:37.894 [2024-05-12 05:07:44.663052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:37.894 [2024-05-12 05:07:44.663061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:37.894 [2024-05-12 05:07:44.663070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:37.894 [2024-05-12 05:07:44.663080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.663090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:37.894 [2024-05-12 05:07:44.663101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.365 ms 00:27:37.894 [2024-05-12 05:07:44.663111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.676464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.676515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:37.894 [2024-05-12 05:07:44.676536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.316 ms 00:27:37.894 [2024-05-12 05:07:44.676547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.676804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.894 [2024-05-12 05:07:44.676831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:37.894 [2024-05-12 05:07:44.676844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:27:37.894 [2024-05-12 05:07:44.676855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.721338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.721400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:37.894 [2024-05-12 05:07:44.721415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.721424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.721459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.721474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:37.894 [2024-05-12 05:07:44.721484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.721493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.721575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.721592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:37.894 [2024-05-12 05:07:44.721623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.721664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.721686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.721698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:37.894 [2024-05-12 05:07:44.721708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.721718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.800702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.800763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:37.894 [2024-05-12 05:07:44.800795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.800805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.830988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.831022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:37.894 [2024-05-12 05:07:44.831051] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.831061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.831127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.831143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:37.894 [2024-05-12 05:07:44.831152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.831168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.831217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.831247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:37.894 [2024-05-12 05:07:44.831276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.831302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.831426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.831444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:37.894 [2024-05-12 05:07:44.831455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.831464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.831516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.831533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:37.894 [2024-05-12 05:07:44.831543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.831553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.831594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.831608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:37.894 [2024-05-12 05:07:44.831619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.831628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.831682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:37.894 [2024-05-12 05:07:44.831696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:37.894 [2024-05-12 05:07:44.831707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:37.894 [2024-05-12 05:07:44.831717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.894 [2024-05-12 05:07:44.831846] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8146.239 ms, result 0 00:27:42.080 05:07:48 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:42.081 05:07:48 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:42.081 05:07:48 -- ftl/common.sh@81 -- # local base_bdev= 00:27:42.081 05:07:48 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:42.081 05:07:48 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:42.081 05:07:48 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 
--config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:42.081 05:07:48 -- ftl/common.sh@89 -- # spdk_tgt_pid=79384 00:27:42.081 05:07:48 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:42.081 05:07:48 -- ftl/common.sh@91 -- # waitforlisten 79384 00:27:42.081 05:07:48 -- common/autotest_common.sh@819 -- # '[' -z 79384 ']' 00:27:42.081 05:07:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.081 05:07:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:42.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.081 05:07:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.081 05:07:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:42.081 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:27:42.081 [2024-05-12 05:07:48.575891] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:42.081 [2024-05-12 05:07:48.576057] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79384 ] 00:27:42.081 [2024-05-12 05:07:48.734703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.081 [2024-05-12 05:07:48.878868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:42.081 [2024-05-12 05:07:48.879054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.647 [2024-05-12 05:07:49.525785] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.647 [2024-05-12 05:07:49.525879] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.647 [2024-05-12 05:07:49.665449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.647 [2024-05-12 05:07:49.665507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:42.647 [2024-05-12 05:07:49.665541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:42.647 [2024-05-12 05:07:49.665552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.647 [2024-05-12 05:07:49.665619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.647 [2024-05-12 05:07:49.665644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:42.647 [2024-05-12 05:07:49.665655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:42.647 [2024-05-12 05:07:49.665666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.647 [2024-05-12 05:07:49.665701] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:42.647 [2024-05-12 05:07:49.666637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:42.647 [2024-05-12 05:07:49.666685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.647 [2024-05-12 05:07:49.666701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:42.647 [2024-05-12 05:07:49.666713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.990 ms 00:27:42.647 [2024-05-12 05:07:49.666723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 
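Annotation: after the old target (pid 78807) is killed, tcp_target_setup (ftl/common.sh@81-91, traced above) relaunches spdk_tgt from the tgt.json captured before shutdown and blocks in waitforlisten until the RPC socket answers; the FTL bdev then reloads its superblock and begins the metadata Restore steps that follow. A sketch of the setup path as traced, with this run's paths inlined (the branch structure and backgrounding are inferred from the trace; the verbatim helper presumably also handles a fresh start when no saved config exists):

```bash
# Sketch of tcp_target_setup from ftl/common.sh@81-91 as traced above;
# a reconstruction, not the verbatim helper.
tcp_target_setup() {
  local base_bdev= cache_bdev=
  if [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]; then
    # Restart the target from the config saved before shutdown, pinned to
    # core 0 (spdk_dd ran on core 1, so the two never share a reactor core).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    export spdk_tgt_pid
    waitforlisten "$spdk_tgt_pid"   # polls until /var/tmp/spdk.sock is listening
  fi
}
```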
00:27:42.647 [2024-05-12 05:07:49.667870] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:42.647 [2024-05-12 05:07:49.681523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.647 [2024-05-12 05:07:49.681575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:42.647 [2024-05-12 05:07:49.681607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.654 ms 00:27:42.647 [2024-05-12 05:07:49.681617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.647 [2024-05-12 05:07:49.681684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.647 [2024-05-12 05:07:49.681703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:42.648 [2024-05-12 05:07:49.681718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:42.648 [2024-05-12 05:07:49.681727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.686137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.648 [2024-05-12 05:07:49.686187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:42.648 [2024-05-12 05:07:49.686217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.307 ms 00:27:42.648 [2024-05-12 05:07:49.686227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.686292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.648 [2024-05-12 05:07:49.686312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:42.648 [2024-05-12 05:07:49.686323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:42.648 [2024-05-12 05:07:49.686333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.686388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.648 [2024-05-12 05:07:49.686403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:42.648 [2024-05-12 05:07:49.686414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:42.648 [2024-05-12 05:07:49.686423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.686507] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:42.648 [2024-05-12 05:07:49.690431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.648 [2024-05-12 05:07:49.690479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:42.648 [2024-05-12 05:07:49.690509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.951 ms 00:27:42.648 [2024-05-12 05:07:49.690518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.690559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.648 [2024-05-12 05:07:49.690577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:42.648 [2024-05-12 05:07:49.690588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:42.648 [2024-05-12 05:07:49.690597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.690637] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:42.648 [2024-05-12 
05:07:49.690665] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:42.648 [2024-05-12 05:07:49.690700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:42.648 [2024-05-12 05:07:49.690758] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:42.648 [2024-05-12 05:07:49.690856] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:42.648 [2024-05-12 05:07:49.690871] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:42.648 [2024-05-12 05:07:49.690885] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:42.648 [2024-05-12 05:07:49.690899] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:42.648 [2024-05-12 05:07:49.690911] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:42.648 [2024-05-12 05:07:49.690923] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:42.648 [2024-05-12 05:07:49.690933] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:42.648 [2024-05-12 05:07:49.690943] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:42.648 [2024-05-12 05:07:49.690958] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:42.648 [2024-05-12 05:07:49.690970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.648 [2024-05-12 05:07:49.690981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:42.648 [2024-05-12 05:07:49.690996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:27:42.648 [2024-05-12 05:07:49.691007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.691076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.648 [2024-05-12 05:07:49.691090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:42.648 [2024-05-12 05:07:49.691102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:42.648 [2024-05-12 05:07:49.691112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.648 [2024-05-12 05:07:49.691198] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:42.648 [2024-05-12 05:07:49.691237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:42.648 [2024-05-12 05:07:49.691252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:42.648 [2024-05-12 05:07:49.691290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:42.648 [2024-05-12 05:07:49.691310] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:42.648 [2024-05-12 05:07:49.691321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:42.648 
[2024-05-12 05:07:49.691330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:42.648 [2024-05-12 05:07:49.691350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:42.648 [2024-05-12 05:07:49.691359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:42.648 [2024-05-12 05:07:49.691380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:42.648 [2024-05-12 05:07:49.691410] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:42.648 [2024-05-12 05:07:49.691419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:42.648 [2024-05-12 05:07:49.691439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:42.648 [2024-05-12 05:07:49.691449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:42.648 [2024-05-12 05:07:49.691469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:42.648 [2024-05-12 05:07:49.691479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:42.648 [2024-05-12 05:07:49.691498] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:42.648 [2024-05-12 05:07:49.691508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:42.648 [2024-05-12 05:07:49.691527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:42.648 [2024-05-12 05:07:49.691537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:42.648 [2024-05-12 05:07:49.691556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:42.648 [2024-05-12 05:07:49.691566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:42.648 [2024-05-12 05:07:49.691585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:42.648 [2024-05-12 05:07:49.691595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:42.648 [2024-05-12 05:07:49.691614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691634] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:27:42.648 [2024-05-12 05:07:49.691644] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:42.648 [2024-05-12 05:07:49.691655] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.648 [2024-05-12 05:07:49.691675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:42.648 [2024-05-12 05:07:49.691686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:42.648 [2024-05-12 05:07:49.691696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:42.648 [2024-05-12 05:07:49.691706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:42.648 [2024-05-12 05:07:49.691715] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:42.648 [2024-05-12 05:07:49.691725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:42.648 [2024-05-12 05:07:49.691736] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:42.648 [2024-05-12 05:07:49.691749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.648 [2024-05-12 05:07:49.691761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:42.648 [2024-05-12 05:07:49.691771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:42.648 [2024-05-12 05:07:49.691783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:42.648 [2024-05-12 05:07:49.691794] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:42.648 [2024-05-12 05:07:49.691804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:42.648 [2024-05-12 05:07:49.691815] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:42.648 [2024-05-12 05:07:49.691825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:42.648 [2024-05-12 05:07:49.691836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:42.648 [2024-05-12 05:07:49.691846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:42.648 [2024-05-12 05:07:49.691857] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:42.649 [2024-05-12 05:07:49.691880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:42.649 [2024-05-12 05:07:49.691891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:42.649 [2024-05-12 05:07:49.691902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 
blk_sz:0x3e0a0 00:27:42.649 [2024-05-12 05:07:49.691913] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:42.649 [2024-05-12 05:07:49.691925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.649 [2024-05-12 05:07:49.691937] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.649 [2024-05-12 05:07:49.691948] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:42.649 [2024-05-12 05:07:49.691958] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:42.649 [2024-05-12 05:07:49.691969] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:42.649 [2024-05-12 05:07:49.691980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.691996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:42.649 [2024-05-12 05:07:49.692007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.823 ms 00:27:42.649 [2024-05-12 05:07:49.692018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.707988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.708049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:42.649 [2024-05-12 05:07:49.708082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.892 ms 00:27:42.649 [2024-05-12 05:07:49.708093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.708141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.708155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:42.649 [2024-05-12 05:07:49.708166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:42.649 [2024-05-12 05:07:49.708176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.740075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.740135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:42.649 [2024-05-12 05:07:49.740167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.792 ms 00:27:42.649 [2024-05-12 05:07:49.740181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.740257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.740275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:42.649 [2024-05-12 05:07:49.740286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:42.649 [2024-05-12 05:07:49.740296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.740707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.740735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:42.649 [2024-05-12 05:07:49.740749] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:27:42.649 [2024-05-12 05:07:49.740760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.740820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.740835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:42.649 [2024-05-12 05:07:49.740847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:42.649 [2024-05-12 05:07:49.740857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.756117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.756155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:42.649 [2024-05-12 05:07:49.756186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.233 ms 00:27:42.649 [2024-05-12 05:07:49.756197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.649 [2024-05-12 05:07:49.769411] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:42.649 [2024-05-12 05:07:49.769450] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:42.649 [2024-05-12 05:07:49.769481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.649 [2024-05-12 05:07:49.769492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:42.649 [2024-05-12 05:07:49.769503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.115 ms 00:27:42.649 [2024-05-12 05:07:49.769513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.906 [2024-05-12 05:07:49.785030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.906 [2024-05-12 05:07:49.785067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:42.906 [2024-05-12 05:07:49.785110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.467 ms 00:27:42.906 [2024-05-12 05:07:49.785121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.906 [2024-05-12 05:07:49.797381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.906 [2024-05-12 05:07:49.797416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:42.906 [2024-05-12 05:07:49.797446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.215 ms 00:27:42.906 [2024-05-12 05:07:49.797455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.906 [2024-05-12 05:07:49.809677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.906 [2024-05-12 05:07:49.809711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:42.906 [2024-05-12 05:07:49.809741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.179 ms 00:27:42.906 [2024-05-12 05:07:49.809751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.906 [2024-05-12 05:07:49.810193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.906 [2024-05-12 05:07:49.810236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:42.906 [2024-05-12 05:07:49.810252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:27:42.906 
[2024-05-12 05:07:49.810262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.906 [2024-05-12 05:07:49.869482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.869541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:42.907 [2024-05-12 05:07:49.869574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 59.193 ms 00:27:42.907 [2024-05-12 05:07:49.869584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.879549] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:42.907 [2024-05-12 05:07:49.880206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.880266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:42.907 [2024-05-12 05:07:49.880282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.564 ms 00:27:42.907 [2024-05-12 05:07:49.880292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.880373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.880394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:42.907 [2024-05-12 05:07:49.880406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:42.907 [2024-05-12 05:07:49.880431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.880525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.880572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:42.907 [2024-05-12 05:07:49.880583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:42.907 [2024-05-12 05:07:49.880593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.882216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.882260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:42.907 [2024-05-12 05:07:49.882293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.596 ms 00:27:42.907 [2024-05-12 05:07:49.882304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.882342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.882357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:42.907 [2024-05-12 05:07:49.882368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:42.907 [2024-05-12 05:07:49.882377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.882417] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:42.907 [2024-05-12 05:07:49.882432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.882441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:42.907 [2024-05-12 05:07:49.882451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:42.907 [2024-05-12 05:07:49.882464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.906663] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.906701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:42.907 [2024-05-12 05:07:49.906732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.160 ms 00:27:42.907 [2024-05-12 05:07:49.906742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.906815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.907 [2024-05-12 05:07:49.906833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:42.907 [2024-05-12 05:07:49.906850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:42.907 [2024-05-12 05:07:49.906860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.907 [2024-05-12 05:07:49.908181] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 242.287 ms, result 0 00:27:42.907 [2024-05-12 05:07:49.923008] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.907 [2024-05-12 05:07:49.939010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:42.907 [2024-05-12 05:07:49.947117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:43.165 05:07:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:43.165 05:07:50 -- common/autotest_common.sh@852 -- # return 0 00:27:43.165 05:07:50 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:43.165 05:07:50 -- ftl/common.sh@95 -- # return 0 00:27:43.165 05:07:50 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:43.423 [2024-05-12 05:07:50.428406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.423 [2024-05-12 05:07:50.428474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:43.423 [2024-05-12 05:07:50.428493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:43.423 [2024-05-12 05:07:50.428503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.423 [2024-05-12 05:07:50.428548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.423 [2024-05-12 05:07:50.428562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:43.423 [2024-05-12 05:07:50.428573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:43.423 [2024-05-12 05:07:50.428583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.423 [2024-05-12 05:07:50.428607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.423 [2024-05-12 05:07:50.428633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:43.423 [2024-05-12 05:07:50.428644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:43.423 [2024-05-12 05:07:50.428653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.423 [2024-05-12 05:07:50.428740] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.303 ms, result 0 00:27:43.423 true 00:27:43.423 05:07:50 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.682 
{ 00:27:43.682 "name": "ftl", 00:27:43.682 "properties": [ 00:27:43.682 { 00:27:43.682 "name": "superblock_version", 00:27:43.682 "value": 5, 00:27:43.682 "read-only": true 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "name": "base_device", 00:27:43.682 "bands": [ 00:27:43.682 { 00:27:43.682 "id": 0, 00:27:43.682 "state": "CLOSED", 00:27:43.682 "validity": 1.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 1, 00:27:43.682 "state": "CLOSED", 00:27:43.682 "validity": 1.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 2, 00:27:43.682 "state": "CLOSED", 00:27:43.682 "validity": 0.007843137254901933 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 3, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 4, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 5, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 6, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 7, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 8, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 9, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 10, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 11, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 12, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 13, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 14, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 15, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 16, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 17, 00:27:43.682 "state": "FREE", 00:27:43.682 "validity": 0.0 00:27:43.682 } 00:27:43.682 ], 00:27:43.682 "read-only": true 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "name": "cache_device", 00:27:43.682 "type": "bdev", 00:27:43.682 "chunks": [ 00:27:43.682 { 00:27:43.682 "id": 0, 00:27:43.682 "state": "OPEN", 00:27:43.682 "utilization": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 1, 00:27:43.682 "state": "OPEN", 00:27:43.682 "utilization": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 2, 00:27:43.682 "state": "FREE", 00:27:43.682 "utilization": 0.0 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "id": 3, 00:27:43.682 "state": "FREE", 00:27:43.682 "utilization": 0.0 00:27:43.682 } 00:27:43.682 ], 00:27:43.682 "read-only": true 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "name": "verbose_mode", 00:27:43.682 "value": true, 00:27:43.682 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:43.682 }, 00:27:43.682 { 00:27:43.682 "name": "prep_upgrade_on_shutdown", 00:27:43.682 "value": false, 00:27:43.682 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:43.682 } 00:27:43.682 ] 00:27:43.682 } 00:27:43.682 05:07:50 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:43.682 05:07:50 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:43.682 05:07:50 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.942 05:07:50 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:43.942 05:07:50 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:43.942 05:07:50 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:43.942 05:07:50 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:43.942 05:07:50 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:44.202 Validate MD5 checksum, iteration 1 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:44.202 05:07:51 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:44.202 05:07:51 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:44.202 05:07:51 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:44.202 05:07:51 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:44.202 05:07:51 -- ftl/common.sh@154 -- # return 0 00:27:44.202 05:07:51 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:44.202 [2024-05-12 05:07:51.211811] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:44.202 [2024-05-12 05:07:51.211955] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79429 ] 00:27:44.461 [2024-05-12 05:07:51.367757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.461 [2024-05-12 05:07:51.565150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.567  Copying: 509/1024 [MB] (509 MBps) Copying: 995/1024 [MB] (486 MBps) Copying: 1024/1024 [MB] (average 497 MBps) 00:27:48.567 00:27:48.567 05:07:55 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:48.567 05:07:55 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:50.469 05:07:57 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:50.469 Validate MD5 checksum, iteration 2 00:27:50.469 05:07:57 -- ftl/upgrade_shutdown.sh@103 -- # sum=453baa3910cf89bc32815fb9a4488002 00:27:50.469 05:07:57 -- ftl/upgrade_shutdown.sh@105 -- # [[ 453baa3910cf89bc32815fb9a4488002 != \4\5\3\b\a\a\3\9\1\0\c\f\8\9\b\c\3\2\8\1\5\f\b\9\a\4\4\8\8\0\0\2 ]] 00:27:50.469 05:07:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:50.469 05:07:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:50.469 05:07:57 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:50.469 05:07:57 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:50.469 05:07:57 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:50.469 05:07:57 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:50.469 05:07:57 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:50.469 05:07:57 -- ftl/common.sh@154 -- # return 0 00:27:50.469 05:07:57 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:50.469 [2024-05-12 05:07:57.313141] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:50.469 [2024-05-12 05:07:57.313318] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79497 ] 00:27:50.469 [2024-05-12 05:07:57.483927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.728 [2024-05-12 05:07:57.680021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.227  Copying: 495/1024 [MB] (495 MBps) Copying: 980/1024 [MB] (485 MBps) Copying: 1024/1024 [MB] (average 490 MBps) 00:27:55.227 00:27:55.227 05:08:02 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:55.227 05:08:02 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:57.128 05:08:03 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:57.128 05:08:03 -- ftl/upgrade_shutdown.sh@103 -- # sum=9506d23e1f29adb8b18d8babea9fe276 00:27:57.128 05:08:03 -- ftl/upgrade_shutdown.sh@105 -- # [[ 9506d23e1f29adb8b18d8babea9fe276 != \9\5\0\6\d\2\3\e\1\f\2\9\a\d\b\8\b\1\8\d\8\b\a\b\e\a\9\f\e\2\7\6 ]] 00:27:57.128 05:08:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:57.128 05:08:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:57.128 05:08:03 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:57.128 05:08:03 -- ftl/common.sh@137 -- # [[ -n 79384 ]] 00:27:57.128 05:08:03 -- ftl/common.sh@138 -- # kill -9 79384 00:27:57.128 05:08:03 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:57.128 05:08:03 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:57.128 05:08:03 -- ftl/common.sh@81 -- # local base_bdev= 00:27:57.128 05:08:03 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:57.128 05:08:03 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:57.128 05:08:03 -- ftl/common.sh@89 -- # spdk_tgt_pid=79571 00:27:57.128 05:08:03 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:57.128 05:08:03 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:57.128 05:08:03 -- ftl/common.sh@91 -- # waitforlisten 79571 00:27:57.128 05:08:03 -- common/autotest_common.sh@819 -- # '[' -z 79571 ']' 00:27:57.128 05:08:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.128 05:08:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:57.128 05:08:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.129 05:08:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:57.129 05:08:03 -- common/autotest_common.sh@10 -- # set +x 00:27:57.129 [2024-05-12 05:08:03.996934] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
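The kill/restart sequence traced just above is the core of the test: tcp_target_shutdown_dirty kills the target with SIGKILL so FTL never persists a clean shutdown state, then tcp_target_setup starts a fresh target from the same tgt.json, which has to come up through the recovery path (visible below as "SHM: clean 0", "Recover band state", P2L checkpoint restore, and open-chunk recovery). Condensed from the ftl/common.sh xtrace lines, using the variable names the trace itself shows:

  # Dirty shutdown: SIGKILL leaves the FTL superblock dirty (@137/@138 above).
  [[ -n "$spdk_tgt_pid" ]] && kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid
  # Restart from the saved config; FTL startup now runs recovery instead of a clean load (@115/@85).
  "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' --config="$rootdir/test/ftl/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # autotest_common.sh helper, as in the trace

The cost of the dirty restart shows up in the management-process summaries: the clean startup earlier finished in about 242 ms, while this recovery startup takes about 1393 ms further below, most of it spent recovering the two full NV-cache chunks.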
00:27:57.129 [2024-05-12 05:08:03.997084] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79571 ] 00:27:57.129 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 79384 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:57.129 [2024-05-12 05:08:04.157703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.387 [2024-05-12 05:08:04.300432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:57.387 [2024-05-12 05:08:04.300686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.977 [2024-05-12 05:08:04.957960] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.977 [2024-05-12 05:08:04.958041] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.977 [2024-05-12 05:08:05.095931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.977 [2024-05-12 05:08:05.095973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:57.977 [2024-05-12 05:08:05.096006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:57.977 [2024-05-12 05:08:05.096016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.977 [2024-05-12 05:08:05.096084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.977 [2024-05-12 05:08:05.096108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:57.977 [2024-05-12 05:08:05.096119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:57.977 [2024-05-12 05:08:05.096128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.977 [2024-05-12 05:08:05.096161] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:57.977 [2024-05-12 05:08:05.097067] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:57.977 [2024-05-12 05:08:05.097116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.977 [2024-05-12 05:08:05.097131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:57.977 [2024-05-12 05:08:05.097142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.960 ms 00:27:57.977 [2024-05-12 05:08:05.097151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.977 [2024-05-12 05:08:05.097615] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:58.237 [2024-05-12 05:08:05.115251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.237 [2024-05-12 05:08:05.115311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:58.237 [2024-05-12 05:08:05.115344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.638 ms 00:27:58.237 [2024-05-12 05:08:05.115354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.237 [2024-05-12 05:08:05.124751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.237 [2024-05-12 05:08:05.124788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:58.237 [2024-05-12 05:08:05.124817] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:58.237 [2024-05-12 05:08:05.124827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.125272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.238 [2024-05-12 05:08:05.125300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:58.238 [2024-05-12 05:08:05.125313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:27:58.238 [2024-05-12 05:08:05.125323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.125367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.238 [2024-05-12 05:08:05.125384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:58.238 [2024-05-12 05:08:05.125394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:58.238 [2024-05-12 05:08:05.125403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.125466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.238 [2024-05-12 05:08:05.125483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:58.238 [2024-05-12 05:08:05.125494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:58.238 [2024-05-12 05:08:05.125503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.125530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:58.238 [2024-05-12 05:08:05.128856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.238 [2024-05-12 05:08:05.128889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:58.238 [2024-05-12 05:08:05.128917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.335 ms 00:27:58.238 [2024-05-12 05:08:05.128926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.128963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.238 [2024-05-12 05:08:05.128977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:58.238 [2024-05-12 05:08:05.128988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:58.238 [2024-05-12 05:08:05.128997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.129033] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:58.238 [2024-05-12 05:08:05.129061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:58.238 [2024-05-12 05:08:05.129094] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:58.238 [2024-05-12 05:08:05.129118] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:58.238 [2024-05-12 05:08:05.129224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:58.238 [2024-05-12 05:08:05.129238] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:58.238 [2024-05-12 05:08:05.129251] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:58.238 [2024-05-12 05:08:05.129278] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:58.238 [2024-05-12 05:08:05.129300] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:58.238 [2024-05-12 05:08:05.129311] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:58.238 [2024-05-12 05:08:05.129320] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:58.238 [2024-05-12 05:08:05.129329] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:58.238 [2024-05-12 05:08:05.129338] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:58.238 [2024-05-12 05:08:05.129348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.238 [2024-05-12 05:08:05.129365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:58.238 [2024-05-12 05:08:05.129374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:27:58.238 [2024-05-12 05:08:05.129384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.129448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.238 [2024-05-12 05:08:05.129461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:58.238 [2024-05-12 05:08:05.129474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:58.238 [2024-05-12 05:08:05.129483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.238 [2024-05-12 05:08:05.129559] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:58.238 [2024-05-12 05:08:05.129573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:58.238 [2024-05-12 05:08:05.129584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:58.238 [2024-05-12 05:08:05.129593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.238 [2024-05-12 05:08:05.129603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:58.238 [2024-05-12 05:08:05.129612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:58.238 [2024-05-12 05:08:05.129623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:58.238 [2024-05-12 05:08:05.129632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:58.238 [2024-05-12 05:08:05.129640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:58.238 [2024-05-12 05:08:05.129649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.238 [2024-05-12 05:08:05.129657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:58.238 [2024-05-12 05:08:05.129666] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:58.238 [2024-05-12 05:08:05.129674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.238 [2024-05-12 05:08:05.129683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:58.238 [2024-05-12 05:08:05.129691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:58.238 [2024-05-12 05:08:05.129699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.238 [2024-05-12 05:08:05.129708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:58.238 [2024-05-12 05:08:05.129716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:58.238 [2024-05-12 05:08:05.129725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.238 [2024-05-12 05:08:05.129733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:58.238 [2024-05-12 05:08:05.129741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:58.238 [2024-05-12 05:08:05.129750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:58.238 [2024-05-12 05:08:05.129758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:58.238 [2024-05-12 05:08:05.129767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:58.238 [2024-05-12 05:08:05.129775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:58.238 [2024-05-12 05:08:05.129784] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:58.238 [2024-05-12 05:08:05.129793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:58.238 [2024-05-12 05:08:05.129801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:58.238 [2024-05-12 05:08:05.129809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:58.238 [2024-05-12 05:08:05.129818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:58.238 [2024-05-12 05:08:05.129826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:58.238 [2024-05-12 05:08:05.129835] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:58.239 [2024-05-12 05:08:05.129843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:58.239 [2024-05-12 05:08:05.129851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:58.239 [2024-05-12 05:08:05.129860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:58.239 [2024-05-12 05:08:05.129868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:58.239 [2024-05-12 05:08:05.129876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.239 [2024-05-12 05:08:05.129885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:58.239 [2024-05-12 05:08:05.129896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:58.239 [2024-05-12 05:08:05.129905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.239 [2024-05-12 05:08:05.129914] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:58.239 [2024-05-12 05:08:05.129923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:58.239 [2024-05-12 05:08:05.129932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:58.239 [2024-05-12 05:08:05.129946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.239 [2024-05-12 05:08:05.129956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:58.239 [2024-05-12 05:08:05.129965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:58.239 [2024-05-12 05:08:05.129974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:58.239 [2024-05-12 05:08:05.129982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:58.239 [2024-05-12 05:08:05.129991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:58.239 [2024-05-12 05:08:05.129999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:58.239 [2024-05-12 05:08:05.130009] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:58.239 [2024-05-12 05:08:05.130021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:58.239 [2024-05-12 05:08:05.130042] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:58.239 [2024-05-12 05:08:05.130072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:58.239 [2024-05-12 05:08:05.130082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:58.239 [2024-05-12 05:08:05.130091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:58.239 [2024-05-12 05:08:05.130101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:58.239 [2024-05-12 05:08:05.130165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:58.239 [2024-05-12 05:08:05.130175] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:58.239 [2024-05-12 05:08:05.130185] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130196] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:58.239 [2024-05-12 05:08:05.130206] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:58.239 [2024-05-12 05:08:05.130235] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:58.239 
[2024-05-12 05:08:05.130268] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:58.239 [2024-05-12 05:08:05.130280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.130291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:58.239 [2024-05-12 05:08:05.130302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.759 ms 00:27:58.239 [2024-05-12 05:08:05.130311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.239 [2024-05-12 05:08:05.144535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.144590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:58.239 [2024-05-12 05:08:05.144606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.170 ms 00:27:58.239 [2024-05-12 05:08:05.144631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.239 [2024-05-12 05:08:05.144685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.144703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:58.239 [2024-05-12 05:08:05.144713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:58.239 [2024-05-12 05:08:05.144723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.239 [2024-05-12 05:08:05.175046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.175090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:58.239 [2024-05-12 05:08:05.175121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.267 ms 00:27:58.239 [2024-05-12 05:08:05.175130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.239 [2024-05-12 05:08:05.175177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.175192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:58.239 [2024-05-12 05:08:05.175202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:58.239 [2024-05-12 05:08:05.175216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.239 [2024-05-12 05:08:05.175343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.175375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:58.239 [2024-05-12 05:08:05.175403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:27:58.239 [2024-05-12 05:08:05.175413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.239 [2024-05-12 05:08:05.175461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.175478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:58.239 [2024-05-12 05:08:05.175489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:58.239 [2024-05-12 05:08:05.175498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.239 [2024-05-12 05:08:05.190635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.239 [2024-05-12 05:08:05.190690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:58.239 [2024-05-12 
05:08:05.190706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.108 ms 00:27:58.239 [2024-05-12 05:08:05.190720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.190849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.190868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:58.240 [2024-05-12 05:08:05.190880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:58.240 [2024-05-12 05:08:05.190889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.207761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.207815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:58.240 [2024-05-12 05:08:05.207858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.837 ms 00:27:58.240 [2024-05-12 05:08:05.207869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.217548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.217583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:58.240 [2024-05-12 05:08:05.217617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:27:58.240 [2024-05-12 05:08:05.217627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.277568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.277703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:58.240 [2024-05-12 05:08:05.277736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 59.879 ms 00:27:58.240 [2024-05-12 05:08:05.277746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.277844] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:58.240 [2024-05-12 05:08:05.277891] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:58.240 [2024-05-12 05:08:05.277930] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:58.240 [2024-05-12 05:08:05.277968] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:58.240 [2024-05-12 05:08:05.278011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.278022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:58.240 [2024-05-12 05:08:05.278033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:27:58.240 [2024-05-12 05:08:05.278043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.278124] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:58.240 [2024-05-12 05:08:05.278143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.278158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:58.240 [2024-05-12 05:08:05.278169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:58.240 [2024-05-12 
05:08:05.278179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.294596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.294655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:58.240 [2024-05-12 05:08:05.294686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.389 ms 00:27:58.240 [2024-05-12 05:08:05.294696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.304181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.304277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:58.240 [2024-05-12 05:08:05.304295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:58.240 [2024-05-12 05:08:05.304306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.304372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.240 [2024-05-12 05:08:05.304389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:58.240 [2024-05-12 05:08:05.304407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:58.240 [2024-05-12 05:08:05.304417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.240 [2024-05-12 05:08:05.304632] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:58.808 [2024-05-12 05:08:05.873269] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:58.808 [2024-05-12 05:08:05.873544] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:59.375 [2024-05-12 05:08:06.448055] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:59.375 [2024-05-12 05:08:06.448199] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:59.375 [2024-05-12 05:08:06.448261] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:59.375 [2024-05-12 05:08:06.448279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.448305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:59.376 [2024-05-12 05:08:06.448321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1143.842 ms 00:27:59.376 [2024-05-12 05:08:06.448333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.448378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.448392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:59.376 [2024-05-12 05:08:06.448404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:59.376 [2024-05-12 05:08:06.448426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.458592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:59.376 [2024-05-12 05:08:06.458745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.458763] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:59.376 [2024-05-12 05:08:06.458774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.297 ms 00:27:59.376 [2024-05-12 05:08:06.458783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.459465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.459508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:59.376 [2024-05-12 05:08:06.459521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.604 ms 00:27:59.376 [2024-05-12 05:08:06.459536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.461757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.461784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:59.376 [2024-05-12 05:08:06.461811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.198 ms 00:27:59.376 [2024-05-12 05:08:06.461820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.486363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.486400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:59.376 [2024-05-12 05:08:06.486430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.516 ms 00:27:59.376 [2024-05-12 05:08:06.486446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.486550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.486569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:59.376 [2024-05-12 05:08:06.486580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:59.376 [2024-05-12 05:08:06.486589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.488203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.488271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:59.376 [2024-05-12 05:08:06.488316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.594 ms 00:27:59.376 [2024-05-12 05:08:06.488326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.488370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.488385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:59.376 [2024-05-12 05:08:06.488396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:59.376 [2024-05-12 05:08:06.488406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.488462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:59.376 [2024-05-12 05:08:06.488479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.488489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:59.376 [2024-05-12 05:08:06.488500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:59.376 [2024-05-12 05:08:06.488510] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.488601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.376 [2024-05-12 05:08:06.488616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:59.376 [2024-05-12 05:08:06.488642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:59.376 [2024-05-12 05:08:06.488652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.376 [2024-05-12 05:08:06.489749] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1393.268 ms, result 0 00:27:59.634 [2024-05-12 05:08:06.502818] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.634 [2024-05-12 05:08:06.518727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:59.634 [2024-05-12 05:08:06.526835] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.891 05:08:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:59.891 05:08:07 -- common/autotest_common.sh@852 -- # return 0 00:27:59.891 05:08:07 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:59.891 05:08:07 -- ftl/common.sh@95 -- # return 0 00:27:59.891 05:08:07 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:59.891 05:08:07 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:59.891 05:08:07 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:59.892 05:08:07 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:59.892 Validate MD5 checksum, iteration 1 00:27:59.892 05:08:07 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:59.892 05:08:07 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:59.892 05:08:07 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.892 05:08:07 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.892 05:08:07 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.892 05:08:07 -- ftl/common.sh@154 -- # return 0 00:27:59.892 05:08:07 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.149 [2024-05-12 05:08:07.077950] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
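At this point the trace re-enters test_validate_checksum (upgrade_shutdown.sh@116) and replays the same two read-back passes that ran before the kill; the test passes only if the MD5 sums match the pre-shutdown values (453baa… and 9506d2… above). Reconstructed from the @96-@105 xtrace lines, with $testdir standing in for .../spdk/test/ftl as used in the trace, the loop is roughly:

  # Sketch of test_validate_checksum as reconstructed from the xtrace (@96-@105).
  # tcp_dd wraps spdk_dd with the NVMe/TCP initiator config (ftl/common.sh@198/@199).
  skip=0
  for (( i = 0; i < iterations; i++ )); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    # @105 compares $sum against the checksum recorded for this pass before the
    # dirty shutdown; a mismatch would abort the test
  done
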
00:28:00.149 [2024-05-12 05:08:07.078095] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79610 ] 00:28:00.149 [2024-05-12 05:08:07.228485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.407 [2024-05-12 05:08:07.382064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.548  Copying: 500/1024 [MB] (500 MBps) Copying: 990/1024 [MB] (490 MBps) Copying: 1024/1024 [MB] (average 494 MBps) 00:28:04.548 00:28:04.548 05:08:11 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:04.548 05:08:11 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:06.450 05:08:13 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:06.450 Validate MD5 checksum, iteration 2 00:28:06.450 05:08:13 -- ftl/upgrade_shutdown.sh@103 -- # sum=453baa3910cf89bc32815fb9a4488002 00:28:06.450 05:08:13 -- ftl/upgrade_shutdown.sh@105 -- # [[ 453baa3910cf89bc32815fb9a4488002 != \4\5\3\b\a\a\3\9\1\0\c\f\8\9\b\c\3\2\8\1\5\f\b\9\a\4\4\8\8\0\0\2 ]] 00:28:06.450 05:08:13 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:06.450 05:08:13 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:06.450 05:08:13 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:06.450 05:08:13 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:06.450 05:08:13 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:06.450 05:08:13 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:06.450 05:08:13 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:06.450 05:08:13 -- ftl/common.sh@154 -- # return 0 00:28:06.450 05:08:13 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:06.450 [2024-05-12 05:08:13.221998] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:06.450 [2024-05-12 05:08:13.222860] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79678 ] 00:28:06.450 [2024-05-12 05:08:13.399657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.709 [2024-05-12 05:08:13.594725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.216  Copying: 499/1024 [MB] (499 MBps) Copying: 993/1024 [MB] (494 MBps) Copying: 1024/1024 [MB] (average 495 MBps) 00:28:10.217 00:28:10.217 05:08:17 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:10.217 05:08:17 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:12.119 05:08:19 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:12.119 05:08:19 -- ftl/upgrade_shutdown.sh@103 -- # sum=9506d23e1f29adb8b18d8babea9fe276 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@105 -- # [[ 9506d23e1f29adb8b18d8babea9fe276 != \9\5\0\6\d\2\3\e\1\f\2\9\a\d\b\8\b\1\8\d\8\b\a\b\e\a\9\f\e\2\7\6 ]] 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:12.120 05:08:19 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:12.120 05:08:19 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:12.120 05:08:19 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:12.120 05:08:19 -- ftl/common.sh@130 -- # [[ -n 79571 ]] 00:28:12.120 05:08:19 -- ftl/common.sh@131 -- # killprocess 79571 00:28:12.120 05:08:19 -- common/autotest_common.sh@926 -- # '[' -z 79571 ']' 00:28:12.120 05:08:19 -- common/autotest_common.sh@930 -- # kill -0 79571 00:28:12.120 05:08:19 -- common/autotest_common.sh@931 -- # uname 00:28:12.120 05:08:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:12.120 05:08:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79571 00:28:12.379 killing process with pid 79571 00:28:12.379 05:08:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:12.379 05:08:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:12.379 05:08:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79571' 00:28:12.379 05:08:19 -- common/autotest_common.sh@945 -- # kill 79571 00:28:12.379 05:08:19 -- common/autotest_common.sh@950 -- # wait 79571 00:28:12.947 [2024-05-12 05:08:19.982259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:28:12.947 [2024-05-12 05:08:19.997645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:19.997687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:12.947 [2024-05-12 05:08:19.997720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:12.947 [2024-05-12 05:08:19.997730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 
05:08:19.997755] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:12.947 [2024-05-12 05:08:20.000496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.000561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:12.947 [2024-05-12 05:08:20.000595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.723 ms 00:28:12.947 [2024-05-12 05:08:20.000604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.000928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.000951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:12.947 [2024-05-12 05:08:20.000963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:28:12.947 [2024-05-12 05:08:20.000985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.002315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.002367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:12.947 [2024-05-12 05:08:20.002398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.309 ms 00:28:12.947 [2024-05-12 05:08:20.002408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.003598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.003642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:28:12.947 [2024-05-12 05:08:20.003671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.126 ms 00:28:12.947 [2024-05-12 05:08:20.003680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.014873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.014911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:12.947 [2024-05-12 05:08:20.014942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.143 ms 00:28:12.947 [2024-05-12 05:08:20.014959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.021571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.021640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:12.947 [2024-05-12 05:08:20.021672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.572 ms 00:28:12.947 [2024-05-12 05:08:20.021683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.021812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.021831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:12.947 [2024-05-12 05:08:20.021844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:28:12.947 [2024-05-12 05:08:20.021855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.033143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.033192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:12.947 [2024-05-12 05:08:20.033221] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.259 ms 00:28:12.947 [2024-05-12 05:08:20.033238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.043475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.043507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:12.947 [2024-05-12 05:08:20.043535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.198 ms 00:28:12.947 [2024-05-12 05:08:20.043544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.947 [2024-05-12 05:08:20.053850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.947 [2024-05-12 05:08:20.053898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:12.947 [2024-05-12 05:08:20.053927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.271 ms 00:28:12.948 [2024-05-12 05:08:20.053936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.948 [2024-05-12 05:08:20.063810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.948 [2024-05-12 05:08:20.063842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:12.948 [2024-05-12 05:08:20.063870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.811 ms 00:28:12.948 [2024-05-12 05:08:20.063879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.948 [2024-05-12 05:08:20.063914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:12.948 [2024-05-12 05:08:20.063934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:12.948 [2024-05-12 05:08:20.063945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:12.948 [2024-05-12 05:08:20.063955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:12.948 [2024-05-12 05:08:20.063964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.063974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.063983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.063992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064070] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:12.948 [2024-05-12 05:08:20.064143] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:12.948 [2024-05-12 05:08:20.064167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 403c0869-d652-4097-bbc5-81eee09aec87 00:28:12.948 [2024-05-12 05:08:20.064178] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:12.948 [2024-05-12 05:08:20.064187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:12.948 [2024-05-12 05:08:20.064195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:12.948 [2024-05-12 05:08:20.064205] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:12.948 [2024-05-12 05:08:20.064214] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:12.948 [2024-05-12 05:08:20.064224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:12.948 [2024-05-12 05:08:20.064291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:12.948 [2024-05-12 05:08:20.064302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:12.948 [2024-05-12 05:08:20.064312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:12.948 [2024-05-12 05:08:20.064328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.948 [2024-05-12 05:08:20.064345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:12.948 [2024-05-12 05:08:20.064357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:28:12.948 [2024-05-12 05:08:20.064368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.081189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.208 [2024-05-12 05:08:20.081274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:13.208 [2024-05-12 05:08:20.081291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.796 ms 00:28:13.208 [2024-05-12 05:08:20.081302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.081543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.208 [2024-05-12 05:08:20.081567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:13.208 [2024-05-12 05:08:20.081586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:28:13.208 [2024-05-12 05:08:20.081596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.128442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.128501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:13.208 [2024-05-12 05:08:20.128531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.128541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.128615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.128628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:13.208 [2024-05-12 05:08:20.128638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.128647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.128749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.128766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:13.208 [2024-05-12 05:08:20.128793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.128818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.128851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.128868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:13.208 [2024-05-12 05:08:20.128879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.128888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.211231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.211275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:13.208 [2024-05-12 05:08:20.211306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.211316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.241884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.241919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:13.208 [2024-05-12 05:08:20.241949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.241959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.242028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.242043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:13.208 [2024-05-12 05:08:20.242053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.242062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.242124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.242148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:13.208 [2024-05-12 05:08:20.242198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.242208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.242339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.242359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:13.208 [2024-05-12 05:08:20.242370] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.242380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.242426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.242441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:13.208 [2024-05-12 05:08:20.242451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.242467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.242508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.242527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:13.208 [2024-05-12 05:08:20.242538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.242548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.242616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.208 [2024-05-12 05:08:20.242632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:13.208 [2024-05-12 05:08:20.242647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.208 [2024-05-12 05:08:20.242657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.208 [2024-05-12 05:08:20.242784] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 245.107 ms, result 0 00:28:14.145 05:08:21 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:14.145 05:08:21 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.145 05:08:21 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:14.145 05:08:21 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:14.145 05:08:21 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:14.145 05:08:21 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.145 Remove shared memory files 00:28:14.145 05:08:21 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:14.145 05:08:21 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:14.145 05:08:21 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:14.145 05:08:21 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:14.145 05:08:21 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79384 00:28:14.145 05:08:21 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:14.145 05:08:21 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:14.145 00:28:14.145 real 1m23.228s 00:28:14.145 user 1m59.819s 00:28:14.145 sys 0m20.972s 00:28:14.145 05:08:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.145 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:14.145 ************************************ 00:28:14.145 END TEST ftl_upgrade_shutdown 00:28:14.145 ************************************ 00:28:14.145 05:08:21 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:28:14.145 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:28:14.145 05:08:21 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:28:14.145 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:28:14.145 05:08:21 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:14.145 05:08:21 -- ftl/ftl.sh@14 -- # killprocess 71821 00:28:14.145 05:08:21 -- 
common/autotest_common.sh@926 -- # '[' -z 71821 ']' 00:28:14.145 05:08:21 -- common/autotest_common.sh@930 -- # kill -0 71821 00:28:14.145 Process with pid 71821 is not found 00:28:14.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71821) - No such process 00:28:14.145 05:08:21 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71821 is not found' 00:28:14.145 05:08:21 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:28:14.145 05:08:21 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79792 00:28:14.145 05:08:21 -- ftl/ftl.sh@20 -- # waitforlisten 79792 00:28:14.145 05:08:21 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.145 05:08:21 -- common/autotest_common.sh@819 -- # '[' -z 79792 ']' 00:28:14.145 05:08:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.145 05:08:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:14.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.145 05:08:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.145 05:08:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:14.145 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:28:14.145 [2024-05-12 05:08:21.255993] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:14.145 [2024-05-12 05:08:21.256119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79792 ] 00:28:14.404 [2024-05-12 05:08:21.410613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.663 [2024-05-12 05:08:21.555863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:14.663 [2024-05-12 05:08:21.556051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.233 05:08:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:15.233 05:08:22 -- common/autotest_common.sh@852 -- # return 0 00:28:15.233 05:08:22 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:28:15.491 nvme0n1 00:28:15.491 05:08:22 -- ftl/ftl.sh@22 -- # clear_lvols 00:28:15.491 05:08:22 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:15.491 05:08:22 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:15.749 05:08:22 -- ftl/common.sh@28 -- # stores=7e6f19fb-df8d-4473-8850-939fefff2118 00:28:15.749 05:08:22 -- ftl/common.sh@29 -- # for lvs in $stores 00:28:15.749 05:08:22 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e6f19fb-df8d-4473-8850-939fefff2118 00:28:16.007 05:08:22 -- ftl/ftl.sh@23 -- # killprocess 79792 00:28:16.007 05:08:22 -- common/autotest_common.sh@926 -- # '[' -z 79792 ']' 00:28:16.007 05:08:22 -- common/autotest_common.sh@930 -- # kill -0 79792 00:28:16.007 05:08:22 -- common/autotest_common.sh@931 -- # uname 00:28:16.007 05:08:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:16.007 05:08:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79792 00:28:16.007 05:08:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:16.007 killing process with pid 79792 00:28:16.007 05:08:22 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:16.007 05:08:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79792' 00:28:16.007 05:08:22 -- common/autotest_common.sh@945 -- # kill 79792 00:28:16.007 05:08:22 -- common/autotest_common.sh@950 -- # wait 79792 00:28:17.979 05:08:24 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:17.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:17.979 Waiting for block devices as requested 00:28:17.979 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.979 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.979 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:18.239 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:28:23.511 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:28:23.511 Remove shared memory files 00:28:23.511 05:08:30 -- ftl/ftl.sh@28 -- # remove_shm 00:28:23.511 05:08:30 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:23.511 05:08:30 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:23.511 05:08:30 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:23.511 05:08:30 -- ftl/common.sh@207 -- # rm -f rm -f 00:28:23.511 05:08:30 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:23.511 05:08:30 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:23.511 00:28:23.511 real 11m49.465s 00:28:23.511 user 14m41.714s 00:28:23.511 sys 1m24.470s 00:28:23.511 05:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.512 05:08:30 -- common/autotest_common.sh@10 -- # set +x 00:28:23.512 ************************************ 00:28:23.512 END TEST ftl 00:28:23.512 ************************************ 00:28:23.512 05:08:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:23.512 05:08:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:23.512 05:08:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:23.512 05:08:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:23.512 05:08:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:23.512 05:08:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:23.512 05:08:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:23.512 05:08:30 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:28:23.512 05:08:30 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:28:23.512 05:08:30 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:28:23.512 05:08:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:23.512 05:08:30 -- common/autotest_common.sh@10 -- # set +x 00:28:23.512 05:08:30 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:28:23.512 05:08:30 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:28:23.512 05:08:30 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:28:23.512 05:08:30 -- common/autotest_common.sh@10 -- # set +x 00:28:24.888 INFO: APP EXITING 00:28:24.888 INFO: killing all VMs 00:28:24.888 INFO: killing vhost app 00:28:24.888 INFO: EXIT DONE 00:28:25.454 lsblk: /dev/nvme0c0n1: not a block device 00:28:25.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:25.711 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:28:25.711 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:28:25.711 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:28:25.711 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:28:26.645 lsblk: /dev/nvme0c0n1: not a block device 00:28:26.646 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:26.646 Cleaning 00:28:26.646 Removing: /var/run/dpdk/spdk0/config 00:28:26.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:26.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:26.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:26.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:26.646 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:26.646 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:26.646 Removing: /var/run/dpdk/spdk0 00:28:26.646 Removing: /var/run/dpdk/spdk_pid56709 00:28:26.646 Removing: /var/run/dpdk/spdk_pid56913 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57207 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57311 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57405 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57515 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57616 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57661 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57692 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57759 00:28:26.646 Removing: /var/run/dpdk/spdk_pid57865 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58299 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58375 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58451 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58473 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58608 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58632 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58767 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58795 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58855 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58886 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58939 00:28:26.646 Removing: /var/run/dpdk/spdk_pid58970 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59147 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59184 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59258 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59343 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59374 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59450 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59476 00:28:26.646 Removing: /var/run/dpdk/spdk_pid59522 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59548 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59595 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59621 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59662 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59688 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59739 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59766 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59807 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59833 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59874 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59906 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59947 00:28:26.905 Removing: /var/run/dpdk/spdk_pid59977 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60025 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60051 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60092 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60119 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60170 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60196 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60237 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60269 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60310 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60341 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60388 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60414 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60466 00:28:26.905 Removing: 
/var/run/dpdk/spdk_pid60492 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60539 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60569 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60611 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60640 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60690 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60724 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60774 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60800 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60851 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60878 00:28:26.905 Removing: /var/run/dpdk/spdk_pid60920 00:28:26.905 Removing: /var/run/dpdk/spdk_pid61002 00:28:26.905 Removing: /var/run/dpdk/spdk_pid61112 00:28:26.905 Removing: /var/run/dpdk/spdk_pid61279 00:28:26.905 Removing: /var/run/dpdk/spdk_pid61376 00:28:26.905 Removing: /var/run/dpdk/spdk_pid61418 00:28:26.905 Removing: /var/run/dpdk/spdk_pid61886 00:28:26.905 Removing: /var/run/dpdk/spdk_pid62014 00:28:26.905 Removing: /var/run/dpdk/spdk_pid62118 00:28:26.905 Removing: /var/run/dpdk/spdk_pid62177 00:28:26.905 Removing: /var/run/dpdk/spdk_pid62201 00:28:26.905 Removing: /var/run/dpdk/spdk_pid62272 00:28:26.905 Removing: /var/run/dpdk/spdk_pid62973 00:28:26.905 Removing: /var/run/dpdk/spdk_pid63016 00:28:26.905 Removing: /var/run/dpdk/spdk_pid63528 00:28:26.905 Removing: /var/run/dpdk/spdk_pid63632 00:28:26.905 Removing: /var/run/dpdk/spdk_pid63741 00:28:26.905 Removing: /var/run/dpdk/spdk_pid63794 00:28:26.905 Removing: /var/run/dpdk/spdk_pid63825 00:28:26.905 Removing: /var/run/dpdk/spdk_pid63845 00:28:26.905 Removing: /var/run/dpdk/spdk_pid65806 00:28:26.905 Removing: /var/run/dpdk/spdk_pid65962 00:28:26.905 Removing: /var/run/dpdk/spdk_pid65966 00:28:26.905 Removing: /var/run/dpdk/spdk_pid65978 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66031 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66036 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66048 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66095 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66103 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66116 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66161 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66165 00:28:26.905 Removing: /var/run/dpdk/spdk_pid66177 00:28:26.905 Removing: /var/run/dpdk/spdk_pid67632 00:28:26.905 Removing: /var/run/dpdk/spdk_pid67743 00:28:26.905 Removing: /var/run/dpdk/spdk_pid67884 00:28:26.905 Removing: /var/run/dpdk/spdk_pid68000 00:28:26.905 Removing: /var/run/dpdk/spdk_pid68119 00:28:26.905 Removing: /var/run/dpdk/spdk_pid68235 00:28:26.905 Removing: /var/run/dpdk/spdk_pid68372 00:28:26.905 Removing: /var/run/dpdk/spdk_pid68452 00:28:26.905 Removing: /var/run/dpdk/spdk_pid68591 00:28:26.905 Removing: /var/run/dpdk/spdk_pid68985 00:28:26.905 Removing: /var/run/dpdk/spdk_pid69026 00:28:26.905 Removing: /var/run/dpdk/spdk_pid69489 00:28:26.905 Removing: /var/run/dpdk/spdk_pid69683 00:28:27.165 Removing: /var/run/dpdk/spdk_pid69784 00:28:27.165 Removing: /var/run/dpdk/spdk_pid69894 00:28:27.165 Removing: /var/run/dpdk/spdk_pid69941 00:28:27.165 Removing: /var/run/dpdk/spdk_pid69971 00:28:27.165 Removing: /var/run/dpdk/spdk_pid70337 00:28:27.165 Removing: /var/run/dpdk/spdk_pid70406 00:28:27.165 Removing: /var/run/dpdk/spdk_pid70486 00:28:27.165 Removing: /var/run/dpdk/spdk_pid70878 00:28:27.165 Removing: /var/run/dpdk/spdk_pid71025 00:28:27.165 Removing: /var/run/dpdk/spdk_pid71821 00:28:27.165 Removing: /var/run/dpdk/spdk_pid71950 00:28:27.165 Removing: /var/run/dpdk/spdk_pid72147 00:28:27.165 Removing: /var/run/dpdk/spdk_pid72251 
00:28:27.165 Removing: /var/run/dpdk/spdk_pid72607 00:28:27.165 Removing: /var/run/dpdk/spdk_pid72873 00:28:27.165 Removing: /var/run/dpdk/spdk_pid73232 00:28:27.165 Removing: /var/run/dpdk/spdk_pid73428 00:28:27.165 Removing: /var/run/dpdk/spdk_pid73564 00:28:27.165 Removing: /var/run/dpdk/spdk_pid73635 00:28:27.165 Removing: /var/run/dpdk/spdk_pid73784 00:28:27.165 Removing: /var/run/dpdk/spdk_pid73815 00:28:27.165 Removing: /var/run/dpdk/spdk_pid73892 00:28:27.165 Removing: /var/run/dpdk/spdk_pid74094 00:28:27.165 Removing: /var/run/dpdk/spdk_pid74349 00:28:27.165 Removing: /var/run/dpdk/spdk_pid74818 00:28:27.165 Removing: /var/run/dpdk/spdk_pid75301 00:28:27.165 Removing: /var/run/dpdk/spdk_pid75782 00:28:27.165 Removing: /var/run/dpdk/spdk_pid76335 00:28:27.165 Removing: /var/run/dpdk/spdk_pid76485 00:28:27.165 Removing: /var/run/dpdk/spdk_pid76580 00:28:27.165 Removing: /var/run/dpdk/spdk_pid77283 00:28:27.165 Removing: /var/run/dpdk/spdk_pid77351 00:28:27.165 Removing: /var/run/dpdk/spdk_pid77826 00:28:27.165 Removing: /var/run/dpdk/spdk_pid78252 00:28:27.165 Removing: /var/run/dpdk/spdk_pid78807 00:28:27.165 Removing: /var/run/dpdk/spdk_pid78927 00:28:27.165 Removing: /var/run/dpdk/spdk_pid78982 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79047 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79111 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79181 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79384 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79429 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79497 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79571 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79610 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79678 00:28:27.165 Removing: /var/run/dpdk/spdk_pid79792 00:28:27.165 Clean 00:28:27.165 killing process with pid 48365 00:28:27.165 killing process with pid 48368 00:28:27.424 05:08:34 -- common/autotest_common.sh@1436 -- # return 0 00:28:27.424 05:08:34 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:28:27.424 05:08:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:27.424 05:08:34 -- common/autotest_common.sh@10 -- # set +x 00:28:27.424 05:08:34 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:28:27.424 05:08:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:27.424 05:08:34 -- common/autotest_common.sh@10 -- # set +x 00:28:27.424 05:08:34 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:27.424 05:08:34 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:27.424 05:08:34 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:27.424 05:08:34 -- spdk/autotest.sh@394 -- # hash lcov 00:28:27.424 05:08:34 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:27.424 05:08:34 -- spdk/autotest.sh@396 -- # hostname 00:28:27.424 05:08:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:27.683 geninfo: WARNING: invalid characters removed from testname! 
00:28:49.617 05:08:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:50.553 05:08:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:53.086 05:08:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:54.990 05:09:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:57.523 05:09:04 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:00.056 05:09:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:01.959 05:09:08 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:01.959 05:09:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:01.959 05:09:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:01.960 05:09:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.960 05:09:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.960 05:09:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.960 05:09:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.960 05:09:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.960 05:09:08 -- paths/export.sh@5 -- $ export PATH 00:29:01.960 05:09:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.960 05:09:08 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:01.960 05:09:08 -- common/autobuild_common.sh@435 -- $ date +%s 00:29:01.960 05:09:08 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715490548.XXXXXX 00:29:01.960 05:09:08 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715490548.TOwyx7 00:29:01.960 05:09:08 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:29:01.960 05:09:08 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:29:01.960 05:09:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:01.960 05:09:08 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:01.960 05:09:08 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:01.960 05:09:08 -- common/autobuild_common.sh@451 -- $ get_config_params 00:29:01.960 05:09:08 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:29:01.960 05:09:08 -- common/autotest_common.sh@10 -- $ set +x 00:29:01.960 05:09:08 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:29:01.960 05:09:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:01.960 05:09:08 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:01.960 05:09:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:01.960 05:09:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:01.960 05:09:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:01.960 05:09:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:01.960 05:09:08 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:01.960 05:09:08 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:01.960 05:09:08 -- common/autotest_common.sh@727 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:01.960 05:09:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:01.960 + [[ -n 5179 ]] 00:29:01.960 + sudo kill 5179 00:29:02.229 [Pipeline] } 00:29:02.248 [Pipeline] // timeout 00:29:02.254 [Pipeline] } 00:29:02.273 [Pipeline] // stage 00:29:02.278 [Pipeline] } 00:29:02.297 [Pipeline] // catchError 00:29:02.306 [Pipeline] stage 00:29:02.309 [Pipeline] { (Stop VM) 00:29:02.323 [Pipeline] sh 00:29:02.603 + vagrant halt 00:29:05.168 ==> default: Halting domain... 00:29:11.743 [Pipeline] sh 00:29:12.024 + vagrant destroy -f 00:29:14.559 ==> default: Removing domain... 00:29:15.139 [Pipeline] sh 00:29:15.419 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:15.427 [Pipeline] } 00:29:15.443 [Pipeline] // stage 00:29:15.447 [Pipeline] } 00:29:15.463 [Pipeline] // dir 00:29:15.468 [Pipeline] } 00:29:15.485 [Pipeline] // wrap 00:29:15.491 [Pipeline] } 00:29:15.505 [Pipeline] // catchError 00:29:15.514 [Pipeline] stage 00:29:15.516 [Pipeline] { (Epilogue) 00:29:15.530 [Pipeline] sh 00:29:15.811 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:21.096 [Pipeline] catchError 00:29:21.098 [Pipeline] { 00:29:21.113 [Pipeline] sh 00:29:21.396 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:21.655 Artifacts sizes are good 00:29:21.665 [Pipeline] } 00:29:21.683 [Pipeline] // catchError 00:29:21.696 [Pipeline] archiveArtifacts 00:29:21.702 Archiving artifacts 00:29:21.846 [Pipeline] cleanWs 00:29:21.859 [WS-CLEANUP] Deleting project workspace... 00:29:21.859 [WS-CLEANUP] Deferred wipeout is used... 00:29:21.881 [WS-CLEANUP] done 00:29:21.883 [Pipeline] } 00:29:21.901 [Pipeline] // stage 00:29:21.907 [Pipeline] } 00:29:21.923 [Pipeline] // node 00:29:21.928 [Pipeline] End of Pipeline 00:29:21.964 Finished: SUCCESS